Re: User cannot do CF login when UAA is being updated

Yunata, Ricky <rickyy@...>
 

Hi Amit, Jiang and Alexander,

Thank you for your suggestions. I have done the steps according to what Amit specified.
I tried to change the flavour of the UAA VMs, but I'm still getting the same error. The steps I followed are described in the attachment.

Regards,
Ricky




From: Amit Gupta [mailto:agupta(a)pivotal.io]
Sent: Tuesday, 15 September 2015 9:10 AM
To: Discussions about Cloud Foundry projects and the system overall.
Subject: [cf-dev] Re: Re: User cannot do CF login when UAA is being updated

Hi Ricky,

My understanding is that you still need help, and the issues Jiang and Alexander raised are different. To avoid confusion, let's keep this thread focused on your issue.

Can you confirm that you have two UAA VMs in separate bosh jobs, separate AZs, etc.? Can you confirm that when you roll the UAAs, only one goes down at a time? The simplest way to effect a roll is to change some trivial property in the manifest for your UAA jobs. If you're using v215, any of the properties referenced here will do:

https://github.com/cloudfoundry/cf-release/blob/v215/jobs/uaa/spec#L321-L335

You should confirm that only one UAA is down at a time, and comes back up before bosh moves on to updating the other UAA.
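One way to watch this is to poll `bosh vms` from another terminal while the deploy runs, along these lines (a rough sketch; the deployment name is a placeholder):

    while true; do bosh vms CF_DEPLOYMENT_NAME | grep uaa; echo '---'; sleep 10; done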

While this roll is happening, can you just do `CF_TRACE=true cf auth USERNAME PASSWORD` in a loop, and if you see one that fails, post the output, along with noting the state of the bosh deploy when the error happens.
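For example, something like this would do it (an untested sketch; substitute real credentials and tune the sleep):

    while true; do
      ts=$(date +%s)
      if ! CF_TRACE=true cf auth USERNAME PASSWORD > "auth-$ts.log" 2>&1; then
        echo "auth failed at $ts; see auth-$ts.log"
      fi
      sleep 5
    done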

Thanks,
Amit

On Mon, Sep 14, 2015 at 10:51 AM, Amit Gupta <agupta(a)pivotal.io> wrote:
Ricky, Jiang, Alexander, are the three of you working together? It's hard to tell since you've got Fujitsu, Gmail, and Altoros email addresses. Are you folks talking about the same issue with the same deployment, or three separate issues?

Ricky, if you still need assistance with your issue, please let us know.

On Mon, Sep 14, 2015 at 10:16 AM, Lomov Alexander <alexander.lomov(a)altoros.com> wrote:
Yes, the problem is that the postgresql database is stored on NFS, which is restarted during the nfs job update. I'm sure that you'll be able to run updates without an outage with a few customizations.

It is hard to tell without knowing your environment, but in the common case the steps will be the following:


1. Add additional instances to the nfs job and customize it for replication (for instance, use these docs on NFS release customization [1])
2. Make your NFS job update serially, without other job updates running in parallel (as is done for postgresql [2])
3. Check your options in the update section [3]; see the sketch after the links below.

[1] https://help.ubuntu.com/community/HighlyAvailableNFS
[2] https://github.com/cloudfoundry/cf-release/blob/master/example_manifests/minimal-aws.yml#L115-L116
[3] https://github.com/cloudfoundry/cf-release/blob/master/example_manifests/minimal-aws.yml#L57-L62
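For reference on [3], the relevant update settings in a BOSH manifest look something like this (values are illustrative, not recommendations):

    update:
      canaries: 1
      canary_watch_time: 30000-600000
      update_watch_time: 5000-600000
      max_in_flight: 1      # how many instances of a job update at once
      serial: false

    jobs:
    - name: nfs_z1
      instances: 1
      update:
        max_in_flight: 1    # per-job override to force a serial update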

On Sep 14, 2015, at 9:47 AM, Yitao Jiang <jiangyt.cn(a)gmail.com> wrote:

On upgrading the deployment, the uaa stopped working because the uaadb filesystem hung. In my environment, the nfs-wal-server's IP changed, which caused uaadb and ccdb to hang. Hard-rebooting the uaadb VM and restarting the uaa service solved the issue.

Hope this helps.

On Mon, Sep 14, 2015 at 2:13 PM, Yunata, Ricky <rickyy(a)fast.au.fujitsu.com> wrote:
Hello,

I have a question regarding UAA in Cloud Foundry. I'm currently running Cloud Foundry on OpenStack.
I have 2 availability zones and redundancy of the important VMs, including UAA.
Whenever I upgrade either the stemcell or the CF release, users are not able to log in through the CF CLI while CF is updating the UAA VMs.
My question is: is this normal behaviour? If I have redundant UAA VMs, shouldn't users still be able to log in even while one is being updated?
I've done this test a few times, with different CF versions and stemcells, and all of them give the same result. The latest test I did was to upgrade the CF version from 212 to 215.
Has anyone experienced the same issue?

Regards,
Ricky




--

Regards,

Yitao
jiangyt.github.io





Re: User cannot do CF login when UAA is being updated

Filip Hanik
 

Amit, see previous comment.

The postgresql database is stored on NFS, which is restarted during the nfs
job update.

UAA, while up, is non-functional while the NFS job is being updated because
it can't reach the DB.



On Mon, Sep 14, 2015 at 5:09 PM, Amit Gupta <agupta(a)pivotal.io> wrote:

Hi Ricky,

My understanding is that you still need help, and the issues Jiang and
Alexander raised are different. To avoid confusion, let's keep this thread
focused on your issue.

Can you confirm that you have two UAA VMs in separate bosh jobs, separate
AZs, etc.? Can you confirm that when you roll the UAAs, only one goes down
at a time? The simplest way to effect a roll is to change some trivial
property in the manifest for your UAA jobs. If you're using v215, any of
the properties referenced here will do:


https://github.com/cloudfoundry/cf-release/blob/v215/jobs/uaa/spec#L321-L335

You should confirm that only one UAA is down at a time, and comes back up
before bosh moves on to updating the other UAA.

While this roll is happening, can you just do `CF_TRACE=true cf auth
USERNAME PASSWORD` in a loop, and if you see one that fails, post the
output, along with noting the state of the bosh deploy when the error
happens.

Thanks,
Amit

On Mon, Sep 14, 2015 at 10:51 AM, Amit Gupta <agupta(a)pivotal.io> wrote:

Ricky, Jiang, Alexander, are the three of you working together? It's
hard to tell since you've got Fujitsu, Gmail, and Altoros email addresses.
Are you folks talking about the same issue with the same deployment, or
three separate issues?

Ricky, if you still need assistance with your issue, please let us know.

On Mon, Sep 14, 2015 at 10:16 AM, Lomov Alexander <
alexander.lomov(a)altoros.com> wrote:

Yes, the problem is that the postgresql database is stored on NFS, which is
restarted during the nfs job update. I'm sure that you'll be able to run
updates without an outage with a few customizations.

It is hard to tell without knowing your environment, but in the common case
the steps will be the following:


1. Add additional instances to the nfs job and customize it for
replication (for instance, use these docs on NFS release customization [1])
2. Make your NFS job update serially, without other job updates running in
parallel (as is done for postgresql [2])
3. Check your options in the update section [3].


[1] https://help.ubuntu.com/community/HighlyAvailableNFS
[2]
https://github.com/cloudfoundry/cf-release/blob/master/example_manifests/minimal-aws.yml#L115-L116
[3]
https://github.com/cloudfoundry/cf-release/blob/master/example_manifests/minimal-aws.yml#L57-L62

On Sep 14, 2015, at 9:47 AM, Yitao Jiang <jiangyt.cn(a)gmail.com> wrote:

On upgrading the deployment, the uaa stopped working because the uaadb
filesystem hung. In my environment, the nfs-wal-server's IP changed,
which caused uaadb and ccdb to hang. Hard-rebooting the uaadb VM and
restarting the uaa service solved the issue.

Hope this helps.

On Mon, Sep 14, 2015 at 2:13 PM, Yunata, Ricky <
rickyy(a)fast.au.fujitsu.com> wrote:

Hello,



I have a question regarding UAA in Cloud Foundry. I'm currently running
Cloud Foundry on OpenStack.

I have 2 availability zones and redundancy of the important VMs,
including UAA.

Whenever I upgrade either the stemcell or the CF release, users are not
able to log in through the CF CLI while CF is updating the UAA VMs.

My question is: is this normal behaviour? If I have redundant UAA VMs,
shouldn't users still be able to log in even while one is being updated?

I've done this test a few times, with different CF versions and
stemcells, and all of them give the same result. The latest test I did
was to upgrade the CF version from 212 to 215.

Has anyone experienced the same issue?



Regards,

Ricky


--

Regards,

Yitao
jiangyt.github.io



Re: CF Release Scripts Moved

Joseph Palermo <jpalermo@...>
 

There should be a PR to the repo to fix these now.

On Monday, September 14, 2015, Michal Kuratczyk <mkuratczyk(a)pivotal.io>
wrote:

Hi,

Bosh-lite scripts still refer to the old paths:
https://github.com/cloudfoundry/bosh-lite/blob/master/bin/provision_cf#L31

https://github.com/cloudfoundry/bosh-lite/blob/master/bin/make_manifest_spiff#L29

Therefore the basic bosh-lite deployment fails at the moment.

Best regards,

On Fri, Sep 11, 2015 at 2:49 AM, Natalie Bennett <nbennett(a)pivotal.io> wrote:

CF Release top-level scripts have been moved. The new location is under
the `scripts/` folder.
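(For example, if you previously ran ./update from the top of the repo, that
would now be ./scripts/update, assuming the script names themselves are
unchanged.)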

Thanks,
CF OSS Release Integration Team


--
Michal Kuratczyk
Pivotal


Re: DEA/Warden staging error

kyle havlovitz <kylehav@...>
 

Here's the full dea_ng and warden debug logs:
https://gist.github.com/MrEnzyme/6dcc74174482ac62c1cf

Are there any other places I should look for logs?

On Mon, Sep 14, 2015 at 8:14 PM, CF Runtime <cfruntime(a)gmail.com> wrote:

That's not an error we normally get. It's not clear if the
staging_info.yml error is the source of the problem or an artifact of it.
Having more logs would allow us to speculate more.

Joseph & Dan
OSS Release Integration Team

On Mon, Sep 14, 2015 at 2:24 PM, kyle havlovitz <kylehav(a)gmail.com> wrote:

I have the cloudfoundry components built, configured and running on one
VM (not in BOSH), and when I push an app I'm getting a generic 'FAILED
StagingError' message after '-----> Downloaded app package (460K)'.

There's nothing in the logs for the dea/warden that seems suspect other
than these 2 things:


{
"timestamp": 1441985105.8883495,

"message": "Exited with status 1 (35.120s):
[[\"/opt/cloudfoundry/warden/warden/src/closefds/closefds\",
\"/opt/cloudfoundry/warden/warden/src/closefds/closefds\"],
\"/var/warden/containers/18vf956il5v/bin/iomux-link\", \"-w\",
\"/var/warden/containers/18vf956il5v/jobs/8/cursors\",
\"/var/warden/containers/18vf956il5v/jobs/8\"]",
"log_level": "warn",

"source": "Warden::Container::Linux",

"data": {

"handle": "18vf956il5v",

"stdout": "",

"stderr": ""

},

"thread_id": 69890836968240,

"fiber_id": 69890849112480,

"process_id": 17063,

"file":
"/opt/cloudfoundry/warden/warden/lib/warden/container/spawn.rb",
"lineno": 135,

"method": "set_deferred_success"

}



{
"timestamp": 1441985105.94083,

"message": "Exited with status 23 (0.023s):
[[\"/opt/cloudfoundry/warden/warden/src/closefds/closefds\",
\"/opt/cloudfoundry/warden/warden/src/closefds/closefds\"], \"rsync\",
\"-e\", \"/var/warden/containers/18vf956il5v/bin/wsh --socket
/var/warden/containers/18vf956il5v/run/wshd.sock --rsh\", \"-r\", \"-p\",
\"--links\", \"vcap(a)container:/tmp/staged/staging_info.yml\",
\"/tmp/dea_ng/staging/d20150911-17093-1amg6y8\"]",
"log_level": "warn",

"source": "Warden::Container::Linux",

"data": {

"handle": "18vf956il5v",

"stdout": "",

"stderr": "rsync: link_stat \"/tmp/staged/staging_info.yml\"
failed: No such file or directory (2)\nrsync error: some files/attrs were
not transferred (see previous errors) (code 23) at main.c(1655)
[Receiver=3.1.0]\nrsync: [Receiver] write error: Broken pipe (32)\n"
},

"thread_id": 69890836968240,

"fiber_id": 69890849112480,

"process_id": 17063,

"file":
"/opt/cloudfoundry/warden/warden/lib/warden/container/spawn.rb",
"lineno": 135,

"method": "set_deferred_success"

}


And I think the second error is just during cleanup, only failing because
the staging process didn't get far enough in to create the
'staging_info.yml'. The one about iomux-link exiting with status 1 is
pretty mysterious though and I have no idea what caused it. Does anyone
know why this might be happening?


Re: app auto-scaling in OSS CF contribution

Ronak Banka
 

Hi Dies,

App auto-scaling is a much-needed feature for CF OSS; a lot of users want
this functionality.

Once it's on the incubator, the roadmap can be discussed. Hope to see it
on the cf incubator soon.

Regards,
Ronak Banka
Rakuten, Inc.

On Tue, Sep 15, 2015 at 9:00 AM, Koper, Dies <diesk(a)fast.au.fujitsu.com>
wrote:

Hi,



At Fujitsu we're developing app auto-scaling and are considering
proposing it for the cf incubator.

Before we start open-sourcing it, I wanted to ask whether there is any
interest in this in the community, and whether others working on (or
considering working on) an auto-scaler would be interested in
collaborating/aligning with us.



We're looking at providing basic support for scaling up/down based on
metrics like CPU and request count, plus a service broker to enable it for
your app.

We can share a detailed functional description for review in a few weeks.

Depending on priorities, interest and the resources available, we may add
functionality like sending an email notification in addition to/instead of
scaling, or scaling based on other metrics (including app-generated custom
metrics).

Either way, we want to make these things pluggable to allow people to
integrate it with their own (closed-source) monitoring agents or custom
actions.




I feel every PaaS comes with free app auto-scaling functionality (PCF,
Bluemix, OpenShift, AWS, …), so OSS CF deserves one too.



I have discussed this plan with Pivotal and they have encouraged me to
send this email to the list.



Please let me know if you have any questions.



Regards,

Dies Koper

diesk(a)fast.au.fujitsu.com



Question on Custom script working as part of PHP buildpack

Amishi Shah
 

Hi team,

I have a requirement to run a custom script (configuring the OpenAM Web Policy Agent) as part of the PHP buildpack. The requirement is that the Web Policy Agent must have OpenJDK configured before it runs.

Could anyone please share any thoughts on how I can achieve this?

I tried it as part of a custom extension, but it seems like it is not working.
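For context, the extension I tried is roughly this shape (the file layout and names here are mine, simplified from what I have):

    # .extensions/openam_agent/extension.py
    # php-buildpack extensions can contribute commands to run before the app starts
    def preprocess_commands(ctx):
        # placeholder script that configures the Web Policy Agent; it assumes
        # OpenJDK has already been made available in the container by this point
        return [['$HOME/app/bin/configure_agent.sh']]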

Thanks,
Amishi Shah


Re: expected? doppler log "lost election for cluster leader"

Rohit Kumar
 

Hi Guangcai,

The log messages are coming from the syslog_drain_binder process, which is
colocated with the dopplers. The syslog_drain_binder polls the
CloudController for active syslog drain URLs for apps. At any point we only
want one syslog_drain_binder to be active, so that the CloudController
doesn't get overloaded with requests. The election process ensures that.

To answer your questions: yes, these messages are expected. Secondly, the
syslog_drain_binders will run for election after a specified timeout has
expired. All of them try to create a key in etcd, but only one succeeds and
becomes the leader. The exact logic can be found here
<https://github.com/cloudfoundry/loggregator/blob/develop/src/syslog_drain_binder/elector/elector.go#L38-L58>
.
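Conceptually the election is just an atomic create in etcd; with the etcd v2
API it is roughly the following (endpoint, key name and TTL are illustrative):

    # succeeds for exactly one candidate; the others get HTTP 412 and lose
    curl -X PUT "http://10.244.0.42:4001/v2/keys/syslog_drain_binder/leader?prevExist=false" \
         -d value="doppler_z1.0" -d ttl=30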

Rohit

On Mon, Sep 14, 2015 at 1:10 AM, Guangcai Wang <guangcai.wang(a)gmail.com>
wrote:

Hi all,

I have 2 doppler instances. I found that if one of the dopplers wins the
election for cluster leader, the other frequently logs "lost election for
cluster leader", as follows. Is this expected?


{"timestamp":1442214238.411536455,"process_id":7212,"source":"syslog_drain_binder","log_level":"info","message":"Elector:
'doppler_z1.0' lost election for cluster
leader.","data":null,"file":"/var/vcap/data/compile/syslog_drain_binder/loggregator/src/syslog_drain_binder/elector/elector.go","line":57,"method":"syslog_drain_binder/elector.(*Elector).RunForElection"}
{"timestamp":1442214253.724292278,"process_id":7212,"source":"syslog_drain_binder","log_level":"info","message":"Elector:
'doppler_z1.0' lost election for cluster
leader.","data":null,"file":"/var/vcap/data/compile/syslog_drain_binder/loggregator/src/syslog_drain_binder/elector/elector.go","line":57,"method":"syslog_drain_binder/elector.(*Elector).RunForElection"}
{"timestamp":1442214269.286961317,"process_id":7212,"source":"syslog_drain_binder","log_level":"info","message":"Elector:
'doppler_z1.0' lost election for cluster
leader.","data":null,"file":"/var/vcap/data/compile/syslog_drain_binder/loggregator/src/syslog_drain_binder/elector/elector.go","line":57,"method":"syslog_drain_binder/elector.(*Elector).RunForElection"}
{"timestamp":1442214284.720170259,"process_id":7212,"source":"syslog_drain_binder","log_level":"info","message":"Elector:
'doppler_z1.0' lost election for cluster
leader.","data":null,"file":"/var/vcap/data/compile/syslog_drain_binder/loggregator/src/syslog_drain_binder/elector/elector.go","line":57,"method":"syslog_drain_binder/elector.(*Elector).RunForElection"}
{"timestamp":1442214300.056922436,"process_id":7212,"source":"syslog_drain_binder","log_level":"info","message":"Elector:
'doppler_z1.0' lost election for cluster
leader.","data":null,"file":"/var/vcap/data/compile/syslog_drain_binder/loggregator/src/syslog_drain_binder/elector/elector.go","line":57,"method":"syslog_drain_binder/elector.(*Elector).RunForElection"}


I also want to know under which conditions they will run the election for
cluster leader again.


Re: DEA/Warden staging error

CF Runtime
 

That's not an error we normally get. It's not clear if the staging_info.yml
error is the source of the problem or an artifact of it. Having more logs
would allow us to speculate more.

Joseph & Dan
OSS Release Integration Team

On Mon, Sep 14, 2015 at 2:24 PM, kyle havlovitz <kylehav(a)gmail.com> wrote:

I have the cloudfoundry components built, configured and running on one VM
(not in BOSH), and when I push an app I'm getting a generic 'FAILED
StagingError' message after '-----> Downloaded app package (460K)'.

There's nothing in the logs for the dea/warden that seems suspect other
than these 2 things:


{
"timestamp": 1441985105.8883495,

"message": "Exited with status 1 (35.120s):
[[\"/opt/cloudfoundry/warden/warden/src/closefds/closefds\",
\"/opt/cloudfoundry/warden/warden/src/closefds/closefds\"],
\"/var/warden/containers/18vf956il5v/bin/iomux-link\", \"-w\",
\"/var/warden/containers/18vf956il5v/jobs/8/cursors\",
\"/var/warden/containers/18vf956il5v/jobs/8\"]",
"log_level": "warn",

"source": "Warden::Container::Linux",

"data": {

"handle": "18vf956il5v",

"stdout": "",

"stderr": ""

},

"thread_id": 69890836968240,

"fiber_id": 69890849112480,

"process_id": 17063,

"file":
"/opt/cloudfoundry/warden/warden/lib/warden/container/spawn.rb",
"lineno": 135,

"method": "set_deferred_success"

}



{
"timestamp": 1441985105.94083,

"message": "Exited with status 23 (0.023s):
[[\"/opt/cloudfoundry/warden/warden/src/closefds/closefds\",
\"/opt/cloudfoundry/warden/warden/src/closefds/closefds\"], \"rsync\",
\"-e\", \"/var/warden/containers/18vf956il5v/bin/wsh --socket
/var/warden/containers/18vf956il5v/run/wshd.sock --rsh\", \"-r\", \"-p\",
\"--links\", \"vcap(a)container:/tmp/staged/staging_info.yml\",
\"/tmp/dea_ng/staging/d20150911-17093-1amg6y8\"]",
"log_level": "warn",

"source": "Warden::Container::Linux",

"data": {

"handle": "18vf956il5v",

"stdout": "",

"stderr": "rsync: link_stat \"/tmp/staged/staging_info.yml\"
failed: No such file or directory (2)\nrsync error: some files/attrs were
not transferred (see previous errors) (code 23) at main.c(1655)
[Receiver=3.1.0]\nrsync: [Receiver] write error: Broken pipe (32)\n"
},

"thread_id": 69890836968240,

"fiber_id": 69890849112480,

"process_id": 17063,

"file":
"/opt/cloudfoundry/warden/warden/lib/warden/container/spawn.rb",
"lineno": 135,

"method": "set_deferred_success"

}


And I think the second error is just during cleanup, only failing because
the staging process didn't get far enough in to create the
'staging_info.yml'. The one about iomux-link exiting with status 1 is
pretty mysterious though and I have no idea what caused it. Does anyone
know why this might be happening?


Re: [Bosh-lite] Can not recreate vm/job

CF Runtime
 

Hmm, do you know what version of bosh-lite you are using? And could you
also provide us with the output of `bosh status`?
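In the meantime, it may be worth checking whether the disk is stuck mounted
from an earlier attempt (just a guess based on the mtab error below; the
paths are taken from your output):

    grep 32a54912 /etc/mtab
    # if it shows up, unmounting it and re-running the deploy may help
    umount /var/vcap/store/cpi/persistent_bind_mounts_dir/a5532a05-88e5-45aa-5022-ad4c6f81c4cc/32a54912-9641-4c01-577c-99b09bb2d39c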

Joseph & Dan
OSS Release Integration Team

On Fri, Sep 11, 2015 at 2:03 AM, Yitao Jiang <jiangyt.cn(a)gmail.com> wrote:

All,

I just recreated my router VM, but it failed with the following exception.

root(a)bosh-lite:~# bosh -n -d /vagrant/manifests/cf-manifest.yml recreate
router_z1

Processing deployment manifest
------------------------------

Processing deployment manifest
------------------------------
You are about to recreate router_z1/0

Processing deployment manifest
------------------------------

Performing `recreate router_z1/0'...

Director task 128
Started preparing deployment
Started preparing deployment > Binding deployment. Done (00:00:00)
Started preparing deployment > Binding releases. Done (00:00:00)
Started preparing deployment > Binding existing deployment. Done
(00:00:01)
Started preparing deployment > Binding resource pools. Done (00:00:00)
Started preparing deployment > Binding stemcells. Done (00:00:00)
Started preparing deployment > Binding templates. Done (00:00:00)
Started preparing deployment > Binding properties. Done (00:00:00)
Started preparing deployment > Binding unallocated VMs. Done (00:00:00)
Started preparing deployment > Binding instance networks. Done (00:00:00)
Done preparing deployment (00:00:01)

Started preparing package compilation > Finding packages to compile.
Done (00:00:00)

Started preparing dns > Binding DNS. Done (00:00:00)

Started preparing configuration > Binding configuration. Done (00:00:02)

Started updating job api_z1 > api_z1/0. Failed: Attaching disk
'32a54912-9641-4c01-577c-99b09bb2d39c' to VM
'a5532a05-88e5-45aa-5022-ad4c6f81c4cc': Mounting persistent bind mounts
dir: Mounting disk specific persistent bind mount: Running command: 'mount
/var/vcap/store/cpi/disks/32a54912-9641-4c01-577c-99b09bb2d39c
/var/vcap/store/cpi/persistent_bind_mounts_dir/a5532a05-88e5-45aa-5022-ad4c6f81c4cc/32a54912-9641-4c01-577c-99b09bb2d39c
-o loop', stdout: '', stderr: 'mount: according to mtab
/var/vcap/store/cpi/disks/32a54912-9641-4c01-577c-99b09bb2d39c is already
mounted on
/var/vcap/store/cpi/persistent_bind_mounts_dir/a5532a05-88e5-45aa-5022-ad4c6f81c4cc/32a54912-9641-4c01-577c-99b09bb2d39c
as loop
': exit status 32 (00:10:40)

Error 100: Attaching disk '32a54912-9641-4c01-577c-99b09bb2d39c' to VM
'a5532a05-88e5-45aa-5022-ad4c6f81c4cc': Mounting persistent bind mounts
dir: Mounting disk specific persistent bind mount: Running command: 'mount
/var/vcap/store/cpi/disks/32a54912-9641-4c01-577c-99b09bb2d39c
/var/vcap/store/cpi/persistent_bind_mounts_dir/a5532a05-88e5-45aa-5022-ad4c6f81c4cc/32a54912-9641-4c01-577c-99b09bb2d39c
-o loop', stdout: '', stderr: 'mount: according to mtab
/var/vcap/store/cpi/disks/32a54912-9641-4c01-577c-99b09bb2d39c is already
mounted on
/var/vcap/store/cpi/persistent_bind_mounts_dir/a5532a05-88e5-45aa-5022-ad4c6f81c4cc/32a54912-9641-4c01-577c-99b09bb2d39c
as loop
': exit status 32




--

Regards,

Yitao
jiangyt.github.io


app auto-scaling in OSS CF contribution

Koper, Dies <diesk@...>
 

Hi,

At Fujitsu we're developing app auto-scaling and are considering proposing it for the cf incubator.
Before we start open-sourcing it, I wanted to ask whether there is any interest in this in the community, and whether others working on (or considering working on) an auto-scaler would be interested in collaborating/aligning with us.

We're looking at providing basic support for scaling up/down based on metrics like CPU and request count, plus a service broker to enable it for your app.
We can share a detailed functional description for review in a few weeks.
Depending on priorities, interest and the resources available, we may add functionality like sending an email notification in addition to/instead of scaling, or scaling based on other metrics (including app-generated custom metrics).
Either way, we want to make these things pluggable to allow people to integrate it with their own (closed-source) monitoring agents or custom actions.

I feel every PaaS comes with free app auto-scaling functionality (PCF, Bluemix, OpenShift, AWS, ...), so OSS CF deserves one too.

I have discussed this plan with Pivotal and they have encouraged me to send this email to the list.

Please let me know if you have any questions.

Regards,
Dies Koper
diesk(a)fast.au.fujitsu.com


Re: Warden: staging error when pushing app

CF Runtime
 

I think that the DEA will reap any warden containers it finds every 30
seconds. The containers in a grace period are containers it is still
keeping track of.

We're not sure about the error you're getting. If you provide the warden
and dea logs we might be able to trace through what is happening.

Joseph & Dan
OSS Release Integration Team

On Fri, Sep 11, 2015 at 2:44 PM, kyle havlovitz <kylehav(a)gmail.com> wrote:

I built all the components on my own and have them running on one VM, with
the config changed from the bosh defaults as needed (I had this working
before with an older CF version (203), and now I'm updating).

It seems like the app and buildpack are getting placed in the container
successfully, but the push never gets past '-----> Downloaded app
package (460K)'.
I also tried creating a container with the warden client; it says the
container was created successfully, but warden deletes it soon (about 30s)
afterward. I have 'container_grace_time' set to 300, so I thought it would
last longer.
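For reference, this is roughly how I created the test container, from the
warden/warden directory against the default /tmp/warden.sock (from memory,
so the exact invocation may be slightly off):

    bundle exec bin/warden
    warden> create
    warden> list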

On Fri, Sep 11, 2015 at 5:07 PM, CF Runtime <cfruntime(a)gmail.com> wrote:

Hi Kyle,

I'm not sure what you mean by "running it locally". Can you explain in more
detail how you've deployed your CF installation?

Zak + Dan

On Fri, Sep 11, 2015 at 1:12 PM, kyle havlovitz <kylehav(a)gmail.com>
wrote:

I also see this error in the DEA logs:
{"timestamp":1441999712.400334,"message":"Error copying files out of
container: command exited with
failure","log_level":"warn","source":"Staging","data":{"app_guid":"207dd67c-415a-4108-a31d-7e82590b1e28","task_id":"01db0928ddf14426b47e21459bf1e57a"},"thread_id":70184860078900,"fiber_id":70184883843040,"process_id":23523,"file":"/opt/cloudfoundry/dea_next/lib/dea/task.rb","lineno":126,"method":"rescue
in copy_out_request"}
{"timestamp":1441999712.4004574,"message":"staging.task-log.copying-out","log_level":"info","source":"Staging","data":{"app_guid":"207dd67c-415a-4108-a31d-7e82590b1e28","task_id":"01db0928ddf14426b47e21459bf1e57a","source":"/tmp/staged/logs/staging_task.log","destination":"/tmp/dea_ng/staging/d20150911-23523-ms2lax/staging_task.log"},"thread_id":70184860078900,"fiber_id":70184883400540,"process_id":23523,"file":"/opt/cloudfoundry/dea_next/lib/dea/staging/staging_task.rb","lineno":258,"method":"block
in promise_task_log"}

Could it be something wrong with the DEA's directory server? I don't see
anything in its logs after it starts, so maybe it isn't being used.

On Fri, Sep 11, 2015 at 1:55 PM, kyle havlovitz <kylehav(a)gmail.com>
wrote:

I'm not using bosh, just trying to run it locally. I can post the dea
and warden configs though:

dea.yml:

base_dir: /tmp/dea_ng


domain: local.example.com

logging:
file: /opt/cloudfoundry/logs/dea_ng.log

level: debug


loggregator:
router: "127.0.0.1:3456"

shared_secret: "secret"


resources:
memory_mb: 8000

memory_overcommit_factor: 1

disk_mb: 40960

disk_overcommit_factor: 1


nats_servers:
- nats://127.0.0.1:4222


pid_filename: /tmp/dea_ng.pid

warden_socket: /tmp/warden.sock

evacuation_delay_secs: 10

default_health_check_timeout: 120

index: 0

intervals:
heartbeat: 10

advertise: 5

router_register_in_seconds: 20


staging:
enabled: true

memory_limit_mb: 4096

disk_limit_mb: 6144

disk_inode_limit: 200000

cpu_limit_shares: 512

max_staging_duration: 900


instance:
disk_inode_limit: 200000

memory_to_cpu_share_ratio: 8

max_cpu_share_limit: 256

min_cpu_share_limit: 1


dea_ruby: /usr/bin/ruby

# For Go-based directory server
directory_server:

protocol: 'http'

v1_port: 4385

v2_port: 5678

file_api_port: 1234

streaming_timeout: 10

logging:

file: /opt/cloudfoundry/logs/dea_dirserver.log

level: debug


stacks:
- name: cflinuxfs2

package_path: /var/warden/rootfs


placement_properties:
zone: "zone"

warden test_vm.yml:

server:

container_klass: Warden::Container::Linux


# Wait this long before destroying a container, after the last client
# referencing it disconnected. The timer is cancelled when during this

# period, another client references the container.

#

# Clients can be forced to specify this setting by setting the

# server-wide variable to an invalid value:

# container_grace_time: invalid

#

# The grace time can be disabled by setting it to nil:

# container_grace_time: ~

#

container_grace_time: 300


unix_domain_permissions: 0777
unix_domain_path: /tmp/warden.sock


# Specifies the path to the base chroot used as the read-only root
# filesystem

container_rootfs_path: /var/warden/rootfs


# Specifies the path to the parent directory under which all
containers
# will live.

container_depot_path: /var/warden/containers


# See getrlimit(2) for details. Integer values are passed verbatim.
container_rlimits:

core: 0


quota:
disk_quota_enabled: false


allow_nested_warden: false

health_check_server:
port: 2345


logging:
file: /opt/cloudfoundry/logs/warden.log

level: debug2


network:
# Use this /30 network as offset for the network pool.

pool_start_address: 10.254.0.0


# Pool this many /30 networks.
pool_size: 256


# Interface MTU size
# (for OpenStack use 1454 to avoid problems with rubygems with GRE
tunneling)
mtu: 1500


user:
pool_start_uid: 11000

pool_size: 256


This is all using the latest CFv217

On Fri, Sep 11, 2015 at 1:31 PM, CF Runtime <cfruntime(a)gmail.com>
wrote:

Hey Kyle,

Can we take a look at your deployment manifest (with all the secrets
redacted)?

Zak + Dan, CF OSS Integration team

On Fri, Sep 11, 2015 at 8:55 AM, kyle havlovitz <kylehav(a)gmail.com>
wrote:

I'm getting an error pushing any app during the staging step. The cf
logs returns only this:

2015-09-11T15:24:24.33+0000 [API] OUT Created app with guid
47efa472-e2d5-400b-9135-b1a1dbe3ba88
2015-09-11T15:24:24.41+0000 [API] OUT Updated app with guid
47efa472-e2d5-400b-9135-b1a1dbe3ba88
({"route"=>"5737c5f5-b017-43da-9013-2b6fe7db03f7"})
2015-09-11T15:24:29.54+0000 [DEA/0] OUT Got staging request for
app with id 47efa472-e2d5-400b-9135-b1a1dbe3ba88
2015-09-11T15:24:30.71+0000 [API] OUT Updated app with guid
47efa472-e2d5-400b-9135-b1a1dbe3ba88 ({"state"=>"STARTED"})
2015-09-11T15:24:30.76+0000 [STG/0] OUT -----> Downloaded app
package (4.0K)
2015-09-11T15:25:06.00+0000 [API] ERR encountered error:
Staging error: failed to stage application:
2015-09-11T15:25:06.00+0000 [API] ERR Script exited with
status 1
In the warden logs, there are a few suspect messages:

{
"timestamp": 1441985105.8883495,

"message": "Exited with status 1 (35.120s):
[[\"/opt/cloudfoundry/warden/warden/src/closefds/closefds\",
\"/opt/cloudfoundry/warden/warden/src/closefds/closefds\"],
\"/var/warden/containers/18vf956il5v/bin/iomux-link\", \"-w\",
\"/var/warden/containers/18vf956il5v/jobs/8/cursors\",
\"/var/warden/containers/18vf956il5v/jobs/8\"]",
"log_level": "warn",

"source": "Warden::Container::Linux",

"data": {

"handle": "18vf956il5v",

"stdout": "",

"stderr": ""

},

"thread_id": 69890836968240,

"fiber_id": 69890849112480,

"process_id": 17063,

"file":
"/opt/cloudfoundry/warden/warden/lib/warden/container/spawn.rb",
"lineno": 135,

"method": "set_deferred_success"

}



{

"timestamp": 1441985105.94083,

"message": "Exited with status 23 (0.023s):
[[\"/opt/cloudfoundry/warden/warden/src/closefds/closefds\",
\"/opt/cloudfoundry/warden/warden/src/closefds/closefds\"], \"rsync\",
\"-e\", \"/var/warden/containers/18vf956il5v/bin/wsh --socket
/var/warden/containers/18vf956il5v/run/wshd.sock --rsh\", \"-r\", \"-p\",
\"--links\", \"vcap(a)container:/tmp/staged/staging_info.yml\",
\"/tmp/dea_ng/staging/d20150911-17093-1amg6y8\"]",
"log_level": "warn",

"source": "Warden::Container::Linux",

"data": {

"handle": "18vf956il5v",

"stdout": "",

"stderr": "rsync: link_stat \"/tmp/staged/staging_info.yml\"
failed: No such file or directory (2)\nrsync error: some files/attrs were
not transferred (see previous errors) (code 23) at main.c(1655)
[Receiver=3.1.0]\nrsync: [Receiver] write error: Broken pipe (32)\n"
},

"thread_id": 69890836968240,

"fiber_id": 69890849112480,

"process_id": 17063,

"file":
"/opt/cloudfoundry/warden/warden/lib/warden/container/spawn.rb",
"lineno": 135,

"method": "set_deferred_success"

}



{

"timestamp": 1441985106.0086887,

"message": "Killing oom-notifier process",

"log_level": "debug",

"source": "Warden::Container::Features::MemLimit::OomNotifier",

"data": {



},

"thread_id": 69890836968240,

"fiber_id": 69890848620580,

"process_id": 17063,

"file":
"/opt/cloudfoundry/warden/warden/lib/warden/container/features/mem_limit.rb",
"lineno": 51,

"method": "kill"

}



{

"timestamp": 1441985106.0095143,

"message": "Exited with status 0 (35.427s):
[[\"/opt/cloudfoundry/warden/warden/src/closefds/closefds\",
\"/opt/cloudfoundry/warden/warden/src/closefds/closefds\"],
\"/opt/cloudfoundry/warden/warden/src/oom/oom\",
\"/tmp/warden/cgroup/memory/instance-18vf956il5v\"]",
"log_level": "warn",

"source": "Warden::Container::Features::MemLimit::OomNotifier",

"data": {

"stdout": "",

"stderr": ""

},

"thread_id": 69890836968240,

"fiber_id": 69890849112480,

"process_id": 17063,

"file":
"/opt/cloudfoundry/warden/warden/lib/warden/container/spawn.rb",
"lineno": 135,

"method": "set_deferred_success"

}




Obviously something is misconfigured, but I'm not sure what. I don't
know why the out-of-memory messages are appearing, as the memory used by
the test app I've pushed is tiny (a 64M app with the staticfile buildpack),
and the dea config has resource.memory_mb set to 8 gigs and
staging.memory_limit_mb set to 1 gig. Is there some config I'm lacking
that's causing this to fail?


Re: How is App Instance scale down handled

Harpreet Ghai
 

Thanks Joseph and Dan.


Re: CF Release Scripts Moved

Amit Kumar Gupta
 

Hi Michal,

We will be moving the documentation for bosh-lite to docs.cloudfoundry.org
so there is a single source of truth on how to deploy CF to bosh-lite. We
will phase out most of the CF-specific scripts in the bosh-lite repo so
there is no risk of them going stale, as it's not maintainable for them to
track changes in cf-release.

Best,
Amit

On Mon, Sep 14, 2015 at 3:38 PM, Michal Kuratczyk <mkuratczyk(a)pivotal.io>
wrote:

Hi,

Bosh-lite scripts still refer to the old paths:
https://github.com/cloudfoundry/bosh-lite/blob/master/bin/provision_cf#L31

https://github.com/cloudfoundry/bosh-lite/blob/master/bin/make_manifest_spiff#L29

Therefore the basic bosh-lite deployment fails at the moment.

Best regards,

On Fri, Sep 11, 2015 at 2:49 AM, Natalie Bennett <nbennett(a)pivotal.io>
wrote:

CF Release top-level scripts have been moved. The new location is under
the `scripts/` folder.

Thanks,
CF OSS Release Integration Team


--
Michal Kuratczyk
Pivotal


Re: How is App Instance scale down handled

CF Runtime
 

The DEA will receive a request to stop an instance. The DEA calls into warden, telling it to stop the container. Warden will send a TERM signal once a second for 10 seconds, at which point it will send a KILL signal.
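In shell terms the stop behaves roughly like this sketch (an illustration, not the actual warden code):

    for i in $(seq 1 10); do
      kill -TERM "$PID" 2>/dev/null || break   # stop looping once the process exits
      sleep 1
    done
    kill -KILL "$PID" 2>/dev/null              # force-kill anything still running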

Joseph & Dan
OSS Release Integration Team


Re: User cannot do CF login when UAA is being updated

Amit Kumar Gupta
 

Hi Ricky,

My understanding is that you still need help, and the issues Jiang and
Alexander raised are different. To avoid confusion, let's keep this thread
focused on your issue.

Can you confirm that you have two UAA VMs in separate bosh jobs, separate
AZs, etc.? Can you confirm that when you roll the UAAs, only one goes down
at a time? The simplest way to effect a roll is to change some trivial
property in the manifest for your UAA jobs. If you're using v215, any of
the properties referenced here will do:

https://github.com/cloudfoundry/cf-release/blob/v215/jobs/uaa/spec#L321-L335

You should confirm that only one UAA is down at a time, and comes back up
before bosh moves on to updating the other UAA.

While this roll is happening, can you just do `CF_TRACE=true cf auth
USERNAME PASSWORD` in a loop, and if you see one that fails, post the
output, along with noting the state of the bosh deploy when the error
happens.

Thanks,
Amit

On Mon, Sep 14, 2015 at 10:51 AM, Amit Gupta <agupta(a)pivotal.io> wrote:

Ricky, Jiang, Alexander, are the three of you working together? It's hard
to tell since you've got Fujitsu, Gmail, and Altoros email addresses. Are
you folks talking about the same issue with the same deployment, or three
separate issues?

Ricky, if you still need assistance with your issue, please let us know.

On Mon, Sep 14, 2015 at 10:16 AM, Lomov Alexander <
alexander.lomov(a)altoros.com> wrote:

Yes, the problem is that the postgresql database is stored on NFS, which is
restarted during the nfs job update. I'm sure that you'll be able to run
updates without an outage with a few customizations.

It is hard to tell without knowing your environment, but in the common case
the steps will be the following:


1. Add additional instances to the nfs job and customize it for
replication (for instance, use these docs on NFS release customization [1])
2. Make your NFS job update serially, without other job updates running in
parallel (as is done for postgresql [2])
3. Check your options in the update section [3].


[1] https://help.ubuntu.com/community/HighlyAvailableNFS
[2]
https://github.com/cloudfoundry/cf-release/blob/master/example_manifests/minimal-aws.yml#L115-L116
[3]
https://github.com/cloudfoundry/cf-release/blob/master/example_manifests/minimal-aws.yml#L57-L62

On Sep 14, 2015, at 9:47 AM, Yitao Jiang <jiangyt.cn(a)gmail.com> wrote:

On upgrading the deployment, the uaa stopped working because the uaadb
filesystem hung. In my environment, the nfs-wal-server's IP changed, which
caused uaadb and ccdb to hang. Hard-rebooting the uaadb VM and restarting
the uaa service solved the issue.

Hope this helps.

On Mon, Sep 14, 2015 at 2:13 PM, Yunata, Ricky <
rickyy(a)fast.au.fujitsu.com> wrote:

Hello,



I have a question regarding UAA in Cloud Foundry. I'm currently running
Cloud Foundry on OpenStack.

I have 2 availability zones and redundancy of the important VMs,
including UAA.

Whenever I upgrade either the stemcell or the CF release, users are not
able to log in through the CF CLI while CF is updating the UAA VMs.

My question is: is this normal behaviour? If I have redundant UAA VMs,
shouldn't users still be able to log in even while one is being updated?

I've done this test a few times, with different CF versions and stemcells,
and all of them give the same result. The latest test I did was to upgrade
the CF version from 212 to 215.

Has anyone experienced the same issue?



Regards,

Ricky


--

Regards,

Yitao
jiangyt.github.io



Data services import/export

Guillaume Berche
 

Hi,

I'm wondering whether there are plans to normalize backup/restore or
import/export of data services (e.g. mysql, redis...). On top of the
built-in durability that these services offer, this would cover use cases
such as:

- snapshot/restore of data following a manual app-ops error or data
corruption from the application
- the need to clone/copy data services (e.g. reproduce a problem from prod
in qa, or reset to a well-known test data set...)


Following the great cf summit session from Josh Kruck [1], we exchanged
ideas in [2], and I drafted a spec for such an import/export service,
targeting low-volume data services, at [3]. I'd love to hear comments and
suggestions.

I'd like to know whether others in the community have similar needs and
would potentially be interested in collaborating on an open-source
implementation that Orange is starting to work on [4].

Thanks in advance,

Guillaume.

[1]
https://www.youtube.com/watch?v=mQFKr5S7Bhc&index=16&list=PLekKnRxI5BKsDMoBTG6SlwlkTlads4KUa
[2] https://github.com/krujos/data-lifecycle-service-broker/issues/3
[3]
https://docs.google.com/document/d/1Y5vwWjvaUIwHI76XU63cAS8xEOJvN69-cNoCQRqLPqU/edit
[4] https://github.com/Orange-OpenSource/service-db-dumper


Re: love to contribute!

Simon Leung <leungs@...>
 

Hi Armin,
 
I am from the CLI team. Thanks for your interest in the CLI project.
 
We think the best way for you to start is to pick out some bugs from our backlog that are not in our near-future iterations. It will be a great way for you to set up the dev environment and get familiar with the project. You should also start by picking up a 'bug' rather than a 'feature' story, as there is a much lower chance of us changing the story's priority or acceptance requirements.
 
 
This is a sample story which you are welcome to pick up: https://www.pivotaltracker.com/story/show/95206110
 
Our team is always available to help; feel free to reach out through email or a github issue if you have any questions.
 
Cheers,
Simon
 
 
 
 

----- Original message -----
From: Rasheed Abdul-Aziz <rabdulaziz@...>
To: "Discussions about Cloud Foundry projects and the system overall." <cf-dev@...>
Cc:
Subject: Re: [cf-dev] Re: Re: love to contribute!
Date: Sun, Sep 13, 2015 7:48 PM
 
Hi Armin! 
 
I work on the CLI team.
 
Usually people just attack what matters to them and then make PRs. You can see that we've handled a lot of PRs from non-team members. One of the benefits of this is that the user tackles something that, even if we cannot merge it, has value to them.
 
However, if someone is interested in tackling something that meets our short-term goals, off of our backlog of work, we need to think about how to coordinate that.
 
You can imagine some issues with this:
* What if the person changes their mind, for any reason, at any point?
* What if the person produces something we were also working on?
 
I'll bring this up at our daily standup tomorrow and see if we can provide you with some guidance on what to look into.
 
Thanks so much for your interest.
 
Kind Regards,
  Rasheed.
 
On Sun, Sep 13, 2015 at 10:52 AM, Armin Ranjbar <zoup@...> wrote:
Thanks for the reply!
 
i think i do, i signed the CLA a while ago.
i'll certainly join the doc sprint! meanwhile i'd prefer to start working on the CLI as well, but generally, wherever help is needed i'm happy to pitch in.
 
 
---
Armin ranjbar
 
 
On Sun, Sep 13, 2015 at 7:15 PM, James Bayer <jbayer@...> wrote:
armin,
 
thanks for indicating an interest! hopefully you have a CLA on-file. you can see some of the contribution first steps here, including the CLA to fill out if you still need to do that step [1].
 
do you have a particular area of interest in the project? e.g. the cli, the api, services on the platform, documentation? we can check with the teams in the area you're the most interested in.
 
there is also a doc-sprint coming up [2] that may be fun to participate in, even if you aren't able to join in-person there are probably areas of the documentation that can be improved from a remote location. stormy peters is the contact for that.
 
[2] https://www.cloudfoundry.org/please-join-us-for-the-first-ever-cloud-foundry-doc-sprint/
 
On Sun, Sep 13, 2015 at 1:24 AM, Armin Ranjbar <zoup@...> wrote:
Hello! :)
 
i was looking around tracker to find issues to fix (probably low-hanging ones to start with). while i might have found some that i could help with, i wanted to signal you guys as well so i could take the next step in the right direction.
 
thanks!
---
Armin ranjbar
 
 
 
--
Thank you,
 
James Bayer

 

 


DEA/Warden staging error

kyle havlovitz <kylehav@...>
 

I have the cloudfoundry components built, configured and running on one VM
(not in BOSH), and when I push an app I'm getting a generic 'FAILED
StagingError' message after '-----> Downloaded app package (460K)'.

There's nothing in the logs for the dea/warden that seems suspect other
than these 2 things:


{
"timestamp": 1441985105.8883495,

"message": "Exited with status 1 (35.120s):
[[\"/opt/cloudfoundry/warden/warden/src/closefds/closefds\",
\"/opt/cloudfoundry/warden/warden/src/closefds/closefds\"],
\"/var/warden/containers/18vf956il5v/bin/iomux-link\", \"-w\",
\"/var/warden/containers/18vf956il5v/jobs/8/cursors\",
\"/var/warden/containers/18vf956il5v/jobs/8\"]",
"log_level": "warn",

"source": "Warden::Container::Linux",

"data": {

"handle": "18vf956il5v",

"stdout": "",

"stderr": ""

},

"thread_id": 69890836968240,

"fiber_id": 69890849112480,

"process_id": 17063,

"file": "/opt/cloudfoundry/warden/warden/lib/warden/container/spawn.rb",

"lineno": 135,

"method": "set_deferred_success"

}



{
"timestamp": 1441985105.94083,

"message": "Exited with status 23 (0.023s):
[[\"/opt/cloudfoundry/warden/warden/src/closefds/closefds\",
\"/opt/cloudfoundry/warden/warden/src/closefds/closefds\"], \"rsync\",
\"-e\", \"/var/warden/containers/18vf956il5v/bin/wsh --socket
/var/warden/containers/18vf956il5v/run/wshd.sock --rsh\", \"-r\", \"-p\",
\"--links\", \"vcap(a)container:/tmp/staged/staging_info.yml\",
\"/tmp/dea_ng/staging/d20150911-17093-1amg6y8\"]",
"log_level": "warn",

"source": "Warden::Container::Linux",

"data": {

"handle": "18vf956il5v",

"stdout": "",

"stderr": "rsync: link_stat \"/tmp/staged/staging_info.yml\"
failed: No such file or directory (2)\nrsync error: some files/attrs were
not transferred (see previous errors) (code 23) at main.c(1655)
[Receiver=3.1.0]\nrsync: [Receiver] write error: Broken pipe (32)\n"
},

"thread_id": 69890836968240,

"fiber_id": 69890849112480,

"process_id": 17063,

"file": "/opt/cloudfoundry/warden/warden/lib/warden/container/spawn.rb",

"lineno": 135,

"method": "set_deferred_success"

}


And I think the second error is just during cleanup, only failing because
the staging process didn't get far enough in to create the
'staging_info.yml'. The one about iomux-link exiting with status 1 is
pretty mysterious though and I have no idea what caused it. Does anyone
know why this might be happening?


[ANN] go-buildpack v1.6.1 and php-buildpack v4.1.3 released

Mike Dalessio
 

go-buildpack v1.6.1 and php-buildpack v4.1.3 have been released!

----

go-buildpack v1.6.1 -
https://github.com/cloudfoundry/go-buildpack/releases/tag/v1.6.1

- Adding support for Go 1.5.1 (https://www.pivotaltracker.com/story/show/102971246)
- Update default GOVERSION to 1.5.1 for .godir (https://www.pivotaltracker.com/story/show/103219562)

Packaged binaries:

| name | version | cf_stacks |
|------|---------|------------|
| go | 1.2.1 | cflinuxfs2 |
| go | 1.2.2 | cflinuxfs2 |
| go | 1.3.2 | cflinuxfs2 |
| go | 1.3.3 | cflinuxfs2 |
| go | 1.4.1 | cflinuxfs2 |
| go | 1.4.2 | cflinuxfs2 |
| go | 1.5 | cflinuxfs2 |
| go | 1.5.1 | cflinuxfs2 |
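For Godep-managed apps, the buildpack picks the Go version up from the
GoVersion field in Godeps/Godeps.json, e.g. (ImportPath is illustrative):

    {
      "ImportPath": "github.com/example/myapp",
      "GoVersion": "go1.5.1",
      "Deps": []
    }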

----

php-buildpack v4.1.3 -
https://github.com/cloudfoundry/php-buildpack/releases/tag/v4.1.3

- Updating PHP binaries for redis 2.2.7 (https://www.pivotaltracker.com/story/show/100925176)
- Add support for PHP 5.4.45, 5.5.29, 5.6.13 (https://www.pivotaltracker.com/story/show/102517700)
- Remove support for PHP 5.4.43, 5.5.27, 5.6.11
- Upgrade nginx to 1.9.4

Packaged binaries:

- php: 5.4.44, 5.4.45
- modules: amqp, apc, apcu, bz2, curl, dba, exif, fileinfo, ftp, gd,
gettext, gmp, igbinary, imagick, imap, intl, ioncube, ldap, lua,
mailparse,
mbstring, mcrypt, memcache, memcached, mongo, msgpack, mysql, mysqli,
opcache, openssl, pcntl, pdo, pdo_mysql, pdo_pgsql, pdo_sqlite, pgsql,
phalcon, phpiredis, protobuf, protocolbuffers, pspell, readline, redis,
snmp, soap, sockets, suhosin, sundown, twig, xcache, xdebug, xhprof, xsl,
yaf, zip, zlib, zookeeper
- php: 5.5.28, 5.5.29
- modules: amqp, bz2, curl, dba, exif, fileinfo, ftp, gd, gettext,
gmp, igbinary, imagick, imap, intl, ioncube, ldap, lua, mailparse,
mbstring, mcrypt, memcache, memcached, mongo, msgpack, mysql, mysqli,
opcache, openssl, pcntl, pdo, pdo_mysql, pdo_pgsql, pdo_sqlite, pgsql,
phalcon, phpiredis, protobuf, protocolbuffers, pspell, readline, redis,
snmp, soap, sockets, suhosin, sundown, twig, xcache, xdebug, xhprof, xsl,
yaf, zip, zlib
- php: 5.6.12, 5.6.13
- modules: amqp, bz2, curl, dba, exif, fileinfo, ftp, gd, gettext,
gmp, igbinary, imagick, imap, intl, ioncube, ldap, lua, mailparse,
mbstring, mcrypt, memcache, memcached, mongo, msgpack, mysql, mysqli,
opcache, openssl, pcntl, pdo, pdo_mysql, pdo_pgsql, pdo_sqlite, pgsql,
phalcon, phpiredis, protobuf, protocolbuffers, pspell, readline, redis,
snmp, soap, sockets, suhosin, sundown, twig, xcache, xdebug,
xsl, yaf, zip,
zlib
- hhvm: 3.5.0, 3.5.1, 3.6.0, 3.6.1
- composer: 1.0.0-alpha10
- httpd: 2.4.16
- newrelic: 4.23.3.111
- nginx: 1.6.3, 1.8.0, 1.9.4


Re: CF Release Scripts Moved

Michal Kuratczyk
 

Hi,

Bosh-lite scripts still refer to the old paths:
https://github.com/cloudfoundry/bosh-lite/blob/master/bin/provision_cf#L31
https://github.com/cloudfoundry/bosh-lite/blob/master/bin/make_manifest_spiff#L29

Therefore the basic bosh-lite deployment fails at the moment.

Best regards,

On Fri, Sep 11, 2015 at 2:49 AM, Natalie Bennett <nbennett(a)pivotal.io>
wrote:

CF Release top-level scripts have been moved. The new location is under
the `scripts/` folder.

Thanks,
CF OSS Release Integration Team


--
Michal Kuratczyk
Pivotal
