
Re: Proposal: Decomposing cf-release and Extracting Deployment Strategies

Amit Kumar Gupta
 

This forces us to spread all clusterable nodes across 2 deploys and
certain jobs, like CC, use the job_name+index to uniquely identify a node

I believe they're planning on switching to guids for bosh job identifiers.
I saw in another thread you and Dmitriy discussed this. Any other reasons
for having unique job names we should know about?

How would you feel about the interface allowing for specifying
additional releases, jobs, and templates to be colocated on existing jobs,
along with property configuration for these things?

I don't quite follow what you are proposing here. Can you clarify?
What I mean is the tools we build for generating manifests will support
specifying inputs (probably in the form of a YAML file) that declares what
additional releases you want to add to the deployment, what additional jobs
you may want to add, what additional job templates you may want to colocate
with an existing job, and property configuration for those additional jobs
or colocated job templates. A common example is wanting to colocate some
monitoring agent on all the jobs, and providing some credential
configuration so it can pump metrics into some third party service. This
would be for things not already covered by the LAMB architecture.
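For illustration only, here is a hypothetical sketch of what such an extension input and
invocation could look like. None of these names exist yet: the stub format, the
"example-monitoring" release, and the generate_deployment_manifest script path are all
assumptions modeled on cf-release's existing tooling.

# Hypothetical extension stub: declares an extra release and a colocated agent.
cat > extensions.yml <<'EOF'
releases:
  - name: example-monitoring            # additional release to add to the deployment
    version: 12
colocate:
  - job: runner_z1                      # existing job to colocate onto
    templates:
      - name: example-monitoring-agent
        release: example-monitoring
properties:
  example-monitoring-agent:
    api_key: REPLACE_ME                 # credentials so the agent can ship metrics
EOF

# Pass the extension stub alongside the usual stubs when generating the manifest.
./scripts/generate_deployment_manifest aws stubs/cf-stub.yml extensions.yml > cf.yml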

Something like that would work for me as long as we were still able to
take advantage of the scripts/tooling in cf-deployment to manage the config
and templates we manage in lds-deployment.

Yes, that'd be the plan.

Cheers,
Amit


On Mon, Sep 21, 2015 at 2:41 PM, Mike Youngstrom <youngm(a)gmail.com> wrote:

Thanks for the response. See comments below:


Sensitive property management as part of manifest generation
(encrypted or acquired from an outside source)

How do you currently get these encrypted or external values into your
manifests? At manifest generation time, would you be able to generate a
stub on the fly from this source, and pass it into the manifest generation
script?
Yes, that would work fine. Just thought I'd call it out as something our
current solution does that we'd have to augment in cf-deployment.
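(Purely as a sketch of the "stub on the fly" idea: the file names, the gpg-encrypted
source, and the script path below are assumptions, not anything cf-deployment defines
today.)

# Decrypt (or fetch) the sensitive values into a temporary YAML stub...
gpg --decrypt secrets.yml.gpg > /tmp/secrets-stub.yml
# ...which might contain, for example:
#   properties:
#     uaa:
#       admin:
#         client_secret: s3kr3t

# ...then pass the stub to the manifest generation script and clean up.
./scripts/generate_deployment_manifest aws stubs/cf-stub.yml /tmp/secrets-stub.yml > cf.yml
rm /tmp/secrets-stub.yml   # avoid leaving plaintext secrets on disk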


If for some reason we are forced to fork a stock release we'd like to
be able to use that forked release we are building instead of the publicly
available one for manifest generation and release uploads, etc.

Yes, using the stock release will be the default option, but we will
support several other ways of specifying a release, including providing a
URL to a remote tarball, a path to a local release directory, a path to a
local tarball, and maybe a git URL and SHA.
Great!


The job names in each deployment must be unique across the
installation.

Why do the job names need to be unique across deployments?
This is because a single bosh director cannot connect to multiple datacenters, which
for us represent different availability zones. This forces us to spread
all clusterable nodes across 2 deploys, and certain jobs, like CC, use the
job_name+index to uniquely identify a node [0]. Therefore if we have 2 CCs
deployed across 2 AZs we must have one job named cloud_controller_az1 and
the other named cloud_controller_az2 (sketched below). Does that make sense? I recognize
this is mostly the fault of a limitation in Bosh, but until bosh supports
connecting to multiple vsphere datacenters with a single director we will
need to account for it in our templating.

[0]
https://github.com/cloudfoundry/cloud_controller_ng/blob/5257a8af6990e71cd1e34ae8978dfe4773b32826/bosh-templates/cloud_controller_worker_ctl.erb#L48
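(To make the constraint concrete, here is a rough, made-up sketch of the two per-AZ
deployment manifests; names and structure are illustrative only.)

cat > cf-az1.yml <<'EOF'
name: cf-az1
jobs:
  - name: cloud_controller_az1   # job_name+index must be unique across the installation
    instances: 1
EOF

cat > cf-az2.yml <<'EOF'
name: cf-az2
jobs:
  - name: cloud_controller_az2   # would collide if both deployments used "cloud_controller"
    instances: 1
EOF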

Occasionally we may wish to use some config from a stock release not
currently exposed in a cf-deployment template. I'd like to be sure there
is a way we can add that config, in a not hacky way, without waiting for a
PR to be accepted and subsequent release.

This would be ideal. Currently, a lot of the complexity in manifest
generation comes from the fact that if you specify a certain value X, you need to
make sure you specify values Y, Z, etc. in a compatible way. E.g. if you
have 3 etcd instances, then the value for the etcd.machines property needs
to have those 3 IPs. If you specify domain as "mydomain.com", then you
need to specify in other places that the UAA URL is "
https://uaa.mydomain.com". The hope is most of this complexity goes
away with BOSH Links (
https://github.com/cloudfoundry/bosh-notes/blob/master/links.md). My
hope is that, as the complexity goes away, we will have to maintain less
logic and will be able to comfortably expose more, if not all, of the
properties.
Great

We have our own internal bosh releases and config that we'll need to
merge in with the things cf-deployment is doing.

How would you feel about the interface allowing for specifying additional
releases, jobs, and templates to be colocated on existing jobs, along with
property configuration for these things?
I don't quite follow what you are proposing here. Can you clarify?


we'd like to augment this with our own release jobs and config that we
know to work with cf-deployment 250's and perhaps tag it as v250.lds

Would a workflow like this work for you: maintain an lds-deployment repo,
which includes cf-deployment as a submodule, and you can version
lds-deployment and update your submodule pointer to cf-deployment as you
see fit? lds-deployment will probably just need the cf-deployment
submodule, and a config file describing the "blessed" versions of the
non-stock releases you wish to add on. I know this is lacking details, but
does something along those lines sound like a reasonable workflow?
Something like that would work for me as long as we were still able to
take advantage of the scripts/tooling in cf-deployment to manage the config
and templates we manage in lds-deployment.

Thanks,
Mike




On Wed, Sep 16, 2015 at 3:06 PM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Another situation we have that you may want to keep in mind while
developing cf-deployment:

* We are using vsphere and currently we have a cf installation with 2 AZ
using 2 separate vsphere "Datacenters" (more details:
https://github.com/cloudfoundry/bosh-notes/issues/7). This means we
have a CF installation that is actually made up of 2 deployments. So, we
need to generate a manifest for az1 and another for az2. The job names in
each deployment must be unique across the installation (e.g.
cloud_controller_az1 and cloud_controller_az2) would be the cc job names in
each deployment.

Mike

On Wed, Sep 16, 2015 at 3:38 PM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Here are some of the examples:

* Sensitive property management as part of manifest generation
(encrypted or acquired from an outside source)

* We have our own internal bosh releases and config that we'll need to
merge in with the things cf-deployment is doing. For example, if
cf-deployment tags v250 as including Diego 3333 and etcd 34 with given
templates perhaps we'd like to augment this with our own release jobs and
config that we know to work with cf-deployment 250's and perhaps tag it as
v250.lds and that becomes what we use to generate our manifests and upload
releases.

* Occasionally we may wish to use some config from a stock release not
currently exposed in a cf-deployment template. I'd like to be sure there
is a way we can add that config, in a not hacky way, without waiting for a
PR to be accepted and subsequent release.

* If for some reason we are forced to fork a stock release we'd like to
be able to use that forked release we are building instead of the publicly
available one for manifest generation and release uploads, etc.

Does that help?

Mike



On Tue, Sep 15, 2015 at 9:50 PM, Amit Gupta <agupta(a)pivotal.io> wrote:

Thanks for the feedback Mike!

Can you tell us more specifically what sort of extensions you need?
It would be great if cf-deployment provided an interface that could serve
the needs of essentially all operators of CF.

Thanks,
Amit

On Tue, Sep 15, 2015 at 4:02 PM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

This is great stuff! My organization currently maintains our own
custom ways to generate manifests, include secure properties, and manage
release versions.

We would love to base the next generation of our solution on
cf-deployment. Have you put any thought into how others might customize or
extend cf-deployment? Our needs are very similar to yours just sometimes a
little different.

Perhaps a private fork periodically merged with a known good release
combination (tag) might be appropriate? Or some way to include the same
tools into a wholly private repo?

Mike


On Tue, Sep 8, 2015 at 1:22 PM, Amit Gupta <agupta(a)pivotal.io> wrote:

Hi all,

The CF OSS Release Integration team (casually referred to as the
"MEGA team") is trying to solve a lot of tightly interrelated problems, and
make many of said problems less interrelated. It is difficult to address
just one issue without touching the others, so the following proposal
addresses several issues, but the most important ones are:

* decompose cf-release into many independently manageable,
independently testable, independently usable releases
* separate manifest generation strategies from the release source,
paving the way for Diego to be part of the standard deployment

This proposal will outline a picture of how manifest generation will
work in a unified manner in development, test, and integration
environments. It will also outline a picture of what each release’s test
pipelines will look like, how they will feed into a common integration
environment, and how feedback from the integration environment will feed
back into the test environments. Finally, it will propose a picture for
what the integration environment will look like, and how we get from the
current integration environment to where we want to be.

For further details, please feel free to view and comment here:


https://docs.google.com/document/d/1Viga_TzUB2nLxN_ILqksmUiILM1hGhq7MBXxgLaUOkY

Thanks,
Amit, CF OSS Release Integration team


Re: cf push without a manifest file on linux does not work but works on windows

Rasheed Abdul-Aziz
 

Hi Varsha

Could you please repost this to our issue tracker:
https://github.com/cloudfoundry/cli/issues

And when you do so, could you rerun the command with CF_TRACE=true?
Scan it for anything that you feel needs to remain private and hide it with
***'s, and paste the output into the issue.

I'm pretty sure we'll be able to help!

Kind Regards,
Rasheed.

On Sun, Sep 20, 2015 at 11:58 PM, Varsha Nagraj <n.varsha(a)gmail.com> wrote:

I am trying to push a nodejs application without a manifest file as
follows (using the Cloud Foundry push command): cf push appname -c "node app.js"
-d "mydomain.net" -i 1 -n hostname -m 64M -p "path to directory"
--no-manifest.

This works on Windows. However, if I run the same on Linux, it gives me
"incorrect usage". Is there any difference with respect to the double quotes, or
what might be the issue?


Re: User cannot do CF login when UAA is being updated

Yunata, Ricky <rickyy@...>
 

Hi Joseph, Amit & all,

Hi Joseph, have you received the attachment from Dies?
To everyone else, I just wanted to know whether it is normal CF behaviour that users are logged out while UAA is being updated, or whether it is because I have my manifest wrongly configured.
It would be helpful if anyone could give me some answer based on their experience. Thanks

Regards,
Ricky

From: CF Runtime [mailto:cfruntime(a)gmail.com]
Sent: Wednesday, 16 September 2015 7:08 PM
To: Discussions about Cloud Foundry projects and the system overall.
Subject: [cf-dev] Re: Re: Re: Re: Re: Re: User cannot do CF login when UAA is being updated

If you can't get the list to accept the attachment, you can give it to Dies and he should be able to get it to us.

Joseph
OSS Release Integration Team

On Tue, Sep 15, 2015 at 7:19 PM, Yunata, Ricky <rickyy(a)fast.au.fujitsu.com> wrote:
Hi Joseph,

Yes, that is the case. I have sent my test result, but it seems that my e-mail did not get through. How can I send an attachment on this mailing list?

Regards,
Ricky


From: CF Runtime [mailto:cfruntime(a)gmail.com]
Sent: Tuesday, 15 September 2015 8:10 PM
To: Discussions about Cloud Foundry projects and the system overall.
Subject: [cf-dev] Re: Re: Re: Re: User cannot do CF login when UAA is being updated

Couple of updates here for clarity. No databases are stored on NFS in any default installation. NFS is only used to store blobstore data. If you are using the postgres job from cf-release, since it is single node there will be downtime during a stemcell deploy.

I talked with Dies from Fujitsu earlier and confirmed they are NOT using the postgres job but an external non-cf deployed postgres instance. So during a deploy, the UAA db should be up and available the entire time.

The issue they are seeing is that even though the database is up, and I'm guessing there is at least a single node of UAA up during the deploy, there are still login failures.

Joseph
OSS Release Integration Team

On Mon, Sep 14, 2015 at 6:39 PM, Filip Hanik <fhanik(a)pivotal.io> wrote:
Amit, see previous comment.

The Postgresql database is stored on NFS, which is restarted during the nfs job update.
UAA, while up, is non-functional while the NFS job is updated because it can't get to the DB.



On Mon, Sep 14, 2015 at 5:09 PM, Amit Gupta <agupta(a)pivotal.io> wrote:
Hi Ricky,

My understanding is that you still need help, and the issues Jiang and Alexander raised are different. To avoid confusion, let's keep this thread focused on your issue.

Can you confirm that you have two UAA VMs in separate bosh jobs, separate AZs, etc. Can you confirm that when you roll the UAAs, only one goes down at a time? The simplest way to affect a roll is to change some trivial property in the manifest for your UAA jobs. If you're using v215, any of the properties referenced here will do:

https://github.com/cloudfoundry/cf-release/blob/v215/jobs/uaa/spec#L321-L335

You should confirm that only one UAA is down at a time, and comes back up before bosh moves on to updating the other UAA.

While this roll is happening, can you just do `CF_TRACE=true cf auth USERNAME PASSWORD` in a loop, and if you see one that fails, post the output, along with noting the state of the bosh deploy when the error happens.
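A minimal sketch of such a loop (USERNAME and PASSWORD are placeholders):

while true; do
  if ! CF_TRACE=true cf auth USERNAME PASSWORD > auth-trace.log 2>&1; then
    echo "auth failed at $(date)"   # note the state of the bosh deploy at this moment
    cat auth-trace.log
  fi
  sleep 1
done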

Thanks,
Amit

On Mon, Sep 14, 2015 at 10:51 AM, Amit Gupta <agupta(a)pivotal.io> wrote:
Ricky, Jiang, Alexander, are the three of you working together? It's hard to tell since you've got Fujitsu, Gmail, and Altoros email addresses. Are you folks talking about the same issue with the same deployment, or three separate issues?

Ricky, if you still need assistance with your issue, please let us know.

On Mon, Sep 14, 2015 at 10:16 AM, Lomov Alexander <alexander.lomov(a)altoros.com> wrote:
Yes, the problem is that the postgresql database is stored on NFS, which is restarted during the nfs job update. I'm sure that you'll be able to run updates without an outage with a few customizations.

It is hard to tell without knowing your environment, but in the common case the steps will be the following:


1. Add additional instances to the nfs job and customize it to enable replication (for instance, use these docs for release customization [1])
2. Make your NFS job update sequentially, without other jobs updating in parallel (like it is done for postgresql [2])
3. Check your options in the update section [3] (see the sketch after the links below).

[1] https://help.ubuntu.com/community/HighlyAvailableNFS
[2] https://github.com/cloudfoundry/cf-release/blob/master/example_manifests/minimal-aws.yml#L115-L116
[3] https://github.com/cloudfoundry/cf-release/blob/master/example_manifests/minimal-aws.yml#L57-L62
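(A rough sketch of the kind of manifest settings steps 1-3 point at; this fragment is
illustrative and not taken from minimal-aws.yml.)

cat > nfs-update-fragment.yml <<'EOF'
jobs:
  - name: nfs_z1
    instances: 2          # step 1: more than one NFS instance (replication set up separately)
    update:
      max_in_flight: 1    # step 2: roll this job one instance at a time
update:                   # step 3: deployment-wide update options to review
  canaries: 1
  max_in_flight: 1
  canary_watch_time: 30000-600000
  update_watch_time: 30000-600000
EOF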

On Sep 14, 2015, at 9:47 AM, Yitao Jiang <jiangyt.cn(a)gmail.com> wrote:

On upgrading the deployment, UAA was not working due to the uaadb filesystem hanging up. In my environment, the nfs-wal-server's IP changed, which caused uaadb and ccdb to hang. Hard-rebooting the uaadb and restarting the uaa service solved the issue.

Hope this helps.

On Mon, Sep 14, 2015 at 2:13 PM, Yunata, Ricky <rickyy(a)fast.au.fujitsu.com> wrote:
Hello,

I have a question regarding UAA in Cloud Foundry. I’m currently running Cloud Foundry on Openstack.
I have 2 availability zones and redundancy of the important VMs including UAA.
Whenever I do an upgrade of either the stemcell or the CF release, users are not able to do a CF login while CF is updating the UAA VM.
My question is: is this normal behaviour? If I have a redundant UAA VM, shouldn't users still be able to log in to the apps even though it's being updated?
I've done this test a few times, with different CF versions and stemcells, and all of them give me the same result. The latest test that I've done was to upgrade the CF version from 212 to 215.
Has anyone experienced the same issue?

Regards,
Ricky




--

Regards,

Yitao
jiangyt.github.io







[ann] Subway - how to scale out any Cloud Foundry service

Dr Nic Williams
 

Quicky links:

* https://github.com/cloudfoundry-community/cf-subway
*
https://blog.starkandwayne.com/2015/09/21/how-to-scale-out-any-cloud-foundry-service/

We've been using Ferdy's Docker BOSH release since he created it, and have
published new docker images, new wrapper BOSH releases and more. But it
still doesn't scale horizontally (yes, it has Docker Swarm support, but no,
that can't do persistent storage on volumes).

So we created Subway - a broker that allows you to run a fleet of
single-server service brokers such as Docker BOSH release, or
cf-redis-boshrelease.

I'll write up / create a video soon walking through upgrading your existing
in-production single-server services to use Subway.

Have fun!

Nic


--
Dr Nic Williams
Stark & Wayne LLC - consultancy for Cloud Foundry users
http://drnicwilliams.com
http://starkandwayne.com
cell +1 (415) 860-2185
twitter @drnic


Re: Packaging CF app as bosh-release

Amit Kumar Gupta
 

Hey Kayode,

Were you able to make any progress with the deployments you were trying to
do?

Best,
Amit

On Wed, Sep 16, 2015 at 12:48 PM, Amit Gupta <agupta(a)pivotal.io> wrote:

My very limited understanding is that NFS writes to the actual filesystem,
and achieves persistence by having centralized NFS servers where it writes
to a real mounted device, whereas the clients write to an ephemeral
nfs-mount.

My very limited understanding of HDFS is that it's all userland FS, does
not write to the actual filesystem, and relies on replication to other
nodes in the HDFS cluster. Being a userland FS, you don't have to worry
about the data being wiped when a container is shut down, if you were to
run it as an app.

I think one main issue is going to be ensuring that you never lose too
many instances (whether they are containers or VMs), since you might then
lose all replicas of a given data shard. Whether you go with apps or BOSH
VMs doesn't make a big difference here.

Deploying as an app may be a better way to go; it's simpler right now to
configure and deploy an app than to configure and deploy a full BOSH
release. It's also likely to be a more efficient use of resources, since a
BOSH VM can only run one of these spark-job-processors, but a CF
container-runner can run lots of other things. That actually brings up a
different question: is your compute environment a multi-tenant one that
will be running multiple different workloads? E.g. could someone also use
the CF to push their own apps? Or is the whole thing just for your spark
jobs, in which case you might only be running one container per VM anyways?

Assuming you can make use of the VMs for other workloads, I think this
would be an ideal use case for Diego. You probably don't need all the
extra logic around apps, like staging and routing, you just need Diego to
efficiently schedule containers for you.

On Wed, Sep 16, 2015 at 1:13 PM, Kayode Odeyemi <dreyemi(a)gmail.com> wrote:

Thanks Dmitriy,

Just for clarity, are you saying multiple instances of a VM cannot share
a single shared filesystem?

On Wed, Sep 16, 2015 at 6:59 PM, Dmitriy Kalinin <dkalinin(a)pivotal.io>
wrote:

BOSH allocates a persistent disk per instance. It never shares
persistent disks between multiple instances at the same time.

If you need a shared file system, you will have to use some kind of a
release for it. It's not any different from what people do with nfs
server/client.

On Wed, Sep 16, 2015 at 7:09 AM, Amit Gupta <agupta(a)pivotal.io> wrote:

The shared file system aspect is an interesting wrinkle to the
problem. Unless you use some network layer for how you write to the shared
file system, e.g. SSHFS, I think apps will not work: they are isolated to
run in a container, they're given a chroot "jail" for their
file system, and it gets blown away whenever the app is stopped or
restarted (which will commonly happen, e.g. during a rolling deploy of the
container-runner VMs).

Do you have something that currently works? How do your VMs currently
access this shared FS? I'm not sure BOSH has the abstractions for choosing
a shared, already-existing "persistent disk" to be attached to multiple
VMs. I also don't know what happens when you scale your VMs down, because
BOSH would generally destroy the associated persistent disk, but you don't
want to destroy the shared data.

Dmitriy, any idea how BOSH can work with a shared filesystem (e.g.
HDFS)?

Amit

On Wed, Sep 16, 2015 at 6:54 AM, Kayode Odeyemi <dreyemi(a)gmail.com>
wrote:


On Wed, Sep 16, 2015 at 3:44 PM, Amit Gupta <agupta(a)pivotal.io> wrote:

Are the spark jobs tasks that you expect to end, or apps that you
expect to run forever?
They are tasks that run forever. The jobs are subscribers to RabbitMQ
queues that process
messages in batches.


Do your jobs need to write to the file system, or do they access a
shared/distributed file system somehow?
The jobs write to shared filesystem.


Do you need things like a static IP allocated to your jobs?
No.


Are your spark jobs serving any web traffic?
No.




Re: Proposal: Decomposing cf-release and Extracting Deployment Strategies

Mike Youngstrom
 

Thanks for the response. See comments below:


Sensitive property management as part of manifest generation (encrypted
or acquired from an outside source)

How do you currently get these encrypted or external values into your
manifests? At manifest generation time, would you be able to generate a
stub on the fly from this source, and pass it into the manifest generation
script?
Yes, that would work fine. Just thought I'd call it out as something our
current solution does that we'd have to augment in cf-deployment.


If for some reason we are forced to fork a stock release we'd like to
be able to use that forked release we are building instead of the publicly
available one for manifest generation and release uploads, etc.

Yes, using the stock release will be the default option, but we will
support several other ways of specifying a release, including providing a
URL to a remote tarball, a path to a local release directory, a path to a
local tarball, and maybe a git URL and SHA.
Great!


The job names in each deployment must be unique across the
installation.

Why do the job names need to be unique across deployments?
This is because a single bosh director cannot connect to multiple datacenters, which
for us represent different availability zones. This forces us to spread
all clusterable nodes across 2 deploys, and certain jobs, like CC, use the
job_name+index to uniquely identify a node [0]. Therefore if we have 2 CCs
deployed across 2 AZs we must have one job named cloud_controller_az1 and
the other named cloud_controller_az2. Does that make sense? I recognize
this is mostly the fault of a limitation in Bosh, but until bosh supports
connecting to multiple vsphere datacenters with a single director we will
need to account for it in our templating.

[0]
https://github.com/cloudfoundry/cloud_controller_ng/blob/5257a8af6990e71cd1e34ae8978dfe4773b32826/bosh-templates/cloud_controller_worker_ctl.erb#L48

Occasionally we may wish to use some config from a stock release not
currently exposed in a cf-deployment template. I'd like to be sure there
is a way we can add that config, in a not hacky way, without waiting for a
PR to be accepted and subsequent release.

This would be ideal. Currently, a lot of the complexity in manifest
generation comes from the fact that if you specify a certain value X, you need to
make sure you specify values Y, Z, etc. in a compatible way. E.g. if you
have 3 etcd instances, then the value for the etcd.machines property needs
to have those 3 IPs. If you specify domain as "mydomain.com", then you
need to specify in other places that the UAA URL is "
https://uaa.mydomain.com". The hope is most of this complexity goes away
with BOSH Links (
https://github.com/cloudfoundry/bosh-notes/blob/master/links.md). My
hope is that, as the complexity goes away, we will have to maintain less
logic and will be able to comfortably expose more, if not all, of the
properties.
Great

We have our own internal bosh releases and config that we'll need to
merge in with the things cf-deployment is doing.

How would you feel about the interface allowing for specifying additional
releases, jobs, and templates to be colocated on existing jobs, along with
property configuration for these things?
I don't quite follow what you are proposing here. Can you clarify?


we'd like to augment this with our own release jobs and config that we
know to work with cf-deployment 250's and perhaps tag it as v250.lds

Would a workflow like this work for you: maintain an lds-deployment repo,
which includes cf-deployment as a submodule, and you can version
lds-deployment and update your submodule pointer to cf-deployment as you
see fit? lds-deployment will probably just need the cf-deployment
submodule, and a config file describing the "blessed" versions of the
non-stock releases you wish to add on. I know this is lacking details, but
does something along those lines sound like a reasonable workflow?
Something like that would work for me as long as we were still able to take
advantage of the scripts/tooling in cf-deployment to manage the config and
templates we manage in lds-deployment.

Thanks,
Mike




On Wed, Sep 16, 2015 at 3:06 PM, Mike Youngstrom <youngm(a)gmail.com> wrote:

Another situation we have that you may want to keep in mind while
developing cf-deployment:

* We are using vsphere and currently we have a cf installation with 2 AZ
using 2 separate vsphere "Datacenters" (more details:
https://github.com/cloudfoundry/bosh-notes/issues/7). This means we
have a CF installation that is actually made up of 2 deployments. So, we
need to generate a manifest for az1 and another for az2. The job names in
each deployment must be unique across the installation (e.g.
cloud_controller_az1 and cloud_controller_az2) would be the cc job names in
each deployment.

Mike

On Wed, Sep 16, 2015 at 3:38 PM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Here are some of the examples:

* Sensitive property management as part of manifest generation
(encrypted or acquired from an outside source)

* We have our own internal bosh releases and config that we'll need to
merge in with the things cf-deployment is doing. For example, if
cf-deployment tags v250 as including Diego 3333 and etcd 34 with given
templates perhaps we'd like to augment this with our own release jobs and
config that we know to work with cf-deployment 250's and perhaps tag it as
v250.lds and that becomes what we use to generate our manifests and upload
releases.

* Occasionally we may wish to use some config from a stock release not
currently exposed in a cf-deployment template. I'd like to be sure there
is a way we can add that config, in a not hacky way, without waiting for a
PR to be accepted and subsequent release.

* If for some reason we are forced to fork a stock release we'd like to
be able to use that forked release we are building instead of the publicly
available one for manifest generation and release uploads, etc.

Does that help?

Mike



On Tue, Sep 15, 2015 at 9:50 PM, Amit Gupta <agupta(a)pivotal.io> wrote:

Thanks for the feedback Mike!

Can you tell us more specifically what sort of extensions you need? It
would be great if cf-deployment provided an interface that could serve the
needs of essentially all operators of CF.

Thanks,
Amit

On Tue, Sep 15, 2015 at 4:02 PM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

This is great stuff! My organization currently maintains our own
custom ways to generate manifests, include secure properties, and manage
release versions.

We would love to base the next generation of our solution on
cf-deployment. Have you put any thought into how others might customize or
extend cf-deployment? Our needs are very similar to yours just sometimes a
little different.

Perhaps a private fork periodically merged with a known good release
combination (tag) might be appropriate? Or some way to include the same
tools into a wholly private repo?

Mike


On Tue, Sep 8, 2015 at 1:22 PM, Amit Gupta <agupta(a)pivotal.io> wrote:

Hi all,

The CF OSS Release Integration team (casually referred to as the
"MEGA team") is trying to solve a lot of tightly interrelated problems, and
make many of said problems less interrelated. It is difficult to address
just one issue without touching the others, so the following proposal
addresses several issues, but the most important ones are:

* decompose cf-release into many independently manageable,
independently testable, independently usable releases
* separate manifest generation strategies from the release source,
paving the way for Diego to be part of the standard deployment

This proposal will outline a picture of how manifest generation will
work in a unified manner in development, test, and integration
environments. It will also outline a picture of what each release’s test
pipelines will look like, how they will feed into a common integration
environment, and how feedback from the integration environment will feed
back into the test environments. Finally, it will propose a picture for
what the integration environment will look like, and how we get from the
current integration environment to where we want to be.

For further details, please feel free to view and comment here:


https://docs.google.com/document/d/1Viga_TzUB2nLxN_ILqksmUiILM1hGhq7MBXxgLaUOkY

Thanks,
Amit, CF OSS Release Integration team


Re: CAB September Call on 9/9/2015 @ 8a PDT

Whelan, Phil <phillip.whelan@...>
 

Hi,

Unfortunately, I haven’t found time to write up the CAB call notes blog post this month.

In case you missed it, I’ve posted a recording of the call here
https://www.dropbox.com/s/t8xewz5vw708b5q/cab_9th_sept_2015.mp3?dl=0

Thanks,
Phil

From: Amit Gupta [mailto:agupta(a)pivotal.io]
Sent: Tuesday, September 08, 2015 9:58 PM
To: Discussions about Cloud Foundry projects and the system overall.
Cc: James Bayer; Chip Childers
Subject: [cf-dev] Re: Re: CAB September Call on 9/9/2015 @ 8a PDT

Hi all,

I will not be able to attend the CAB meeting tomorrow, but I have added my notes to the agenda doc. MEGA has been/will be working on a bunch of exciting things, and I welcome questions/comments via email, either through the cf-dev mailing list or directly.

Best,
Amit, CF Release Integration team (MEGA) PM

On Tue, Sep 8, 2015 at 8:19 AM, Michael Maximilien <maxim(a)us.ibm.com> wrote:
Final reminder for the CAB call tomorrow. See you at Pivotal SF and talk to you all then.

Best,
dr.max
ibm cloud labs
silicon valley, ca

Sent from my iPhone

On Sep 2, 2015, at 6:04 PM, Michael Maximilien <maxim(a)us.ibm.com> wrote:
Hi, all,

Quick reminder that the CAB call for September is next week Wednesday September 9th @ 8a PDT.

Please add any project updates to Agenda here: https://docs.google.com/document/d/1SCOlAquyUmNM-AQnekCOXiwhLs6gveTxAcduvDcW_xI/edit#heading=h.o44xhgvum2we

If you have something else to share, please also add an entry at the end.

Best,

Chip, James, and Max

PS: Dr. Nic, this is one week in advance, so no excuses ;) Phone info is listed on the agenda.
PPS: Have a great labor day weekend---if you are in the US.


Re: Recommended BOSH and Stemcell version for CF installation

Amit Kumar Gupta
 

Hi René,

From the release notes:

These are soft recommendations; several different versions of the BOSH
release and stemcell are likely to work fine.

They are just for guidance; they represent the versions we've actually
certified against, because we can't deploy every single combination of
versions.

Best,
Amit

On Mon, Sep 21, 2015 at 1:09 PM, René Welches <rennis3000(a)googlemail.com>
wrote:

Hi everyone,
we were wondering what the best practice around CF, BOSH and stemcell
versions is.
The CF release notes always indicate a recommended BOSH and stemcell
version for a given CF version.
Due to the setup of our deployment pipeline, our stemcell and BOSH versions
are higher/newer (latest) than the ones recommended for our CF version
(v214).
Is this something we should definitely avoid, or is it actually recommended?
Best
René


Recommended BOSH and Stemcell version for CF installation

René Welches <rennis3000 at googlemail.com...>
 

Hi everyone,
we were wondering what the best practice around CF, BOSH and stemcell
versions is.
The CF release notes always indicate a recommended BOSH and stemcell
version for a given CF version.
Due to the setup of our deployment pipeline, our stemcell and BOSH versions
are higher/newer (latest) than the ones recommended for our CF version
(v214).
Is this something we should definitely avoid, or is it actually recommended?
Best
René


Re: Proposal: Decomposing cf-release and Extracting Deployment Strategies

Amit Kumar Gupta
 

Thanks Mike, this is great feedback!

Sensitive property management as part of manifest generation (encrypted
or acquired from an outside source)

How do you currently get these encrypted or external values into your
manifests? At manifest generation time, would you be able to generate a
stub on the fly from this source, and pass it into the manifest generation
script?

If for some reason we are forced to fork a stock release we'd like to be
able to use that forked release we are building instead of the publicly
available one for manifest generation and release uploads, etc.

Yes, using the stock release will be the default option, but we will
support several other ways of specifying a release, including providing a
URL to a remote tarball, a path to a local release directory, a path to a
local tarball, and maybe a git URL and SHA.
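Purely as an illustration of those options (the stub keys below are hypothetical;
cf-deployment has not defined this format):

cat > releases-stub.yml <<'EOF'
releases:
  - name: cf                       # default: the stock, publicly published release
    version: 215
  - name: my-forked-release
    url: https://example.com/releases/my-forked-release-1.tgz   # remote tarball
  - name: my-local-release
    path: ~/workspace/my-local-release                          # local release directory
EOF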

The job names in each deployment must be unique across the installation.
Why do the job names need to be unique across deployments?

Occasionally we may wish to use some config from a stock release not
currently exposed in a cf-deployment template. I'd like to be sure there
is a way we can add that config, in a not hacky way, without waiting for a
PR to be accepted and subsequent release.

This would be ideal. Currently, a lot of the complexity in manifest generation
comes from the fact that if you specify a certain value X, you need to make sure you
specify values Y, Z, etc. in a compatible way. E.g. if you have 3 etcd
instances, then the value for the etcd.machines property needs to have
those 3 IPs. If you specify domain as "mydomain.com", then you need to
specify in other places that the UAA URL is "https://uaa.mydomain.com".
The hope is most of this complexity goes away with BOSH Links (
https://github.com/cloudfoundry/bosh-notes/blob/master/links.md). My hope
is that, as the complexity goes away, we will have to maintain less logic
and will be able to comfortably expose more, if not all, of the properties.
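A minimal sketch of that coupling, with made-up values, showing what the tooling
currently has to keep consistent:

cat > coupled-values-stub.yml <<'EOF'
jobs:
  - name: etcd_z1
    instances: 3
    networks:
      - name: cf1
        static_ips: [10.0.1.10, 10.0.1.11, 10.0.1.12]
properties:
  domain: mydomain.com
  etcd:
    machines: [10.0.1.10, 10.0.1.11, 10.0.1.12]   # must match the 3 etcd IPs above
  uaa:
    url: https://uaa.mydomain.com                 # must follow from the domain above
EOF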

We have our own internal bosh releases and config that we'll need to
merge in with the things cf-deployment is doing.

How would you feel about the interface allowing for specifying additional
releases, jobs, and templates to be colocated on existing jobs, along with
property configuration for these things?

we'd like to augment this with our own release jobs and config that we
know to work with cf-deployment 250's and perhaps tag it as v250.lds

Would a workflow like this work for you: maintain an lds-deployment repo,
which includes cf-deployment as a submodule, and you can version
lds-deployment and update your submodule pointer to cf-deployment as you
see fit? lds-deployment will probably just need the cf-deployment
submodule, and a config file describing the "blessed" versions of the
non-stock releases you wish to add on. I know this is lacking details, but
does something along those lines sound like a reasonable workflow?
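One possible shape for that workflow, sketched with assumed repo URLs, tags, and
file names:

mkdir lds-deployment && cd lds-deployment && git init
git submodule add https://github.com/cloudfoundry/cf-deployment.git cf-deployment
(cd cf-deployment && git checkout v250)   # pin the submodule to a "blessed" cf-deployment tag

cat > blessed-versions.yml <<'EOF'
releases:
  - name: lds-internal-release
    version: 7            # non-stock release known to work with cf-deployment v250
EOF

git add . && git commit -m "lds-deployment based on cf-deployment v250"
git tag v250.lds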

On Wed, Sep 16, 2015 at 3:06 PM, Mike Youngstrom <youngm(a)gmail.com> wrote:

Another situation we have that you may want to keep in mind while
developing cf-deployment:

* We are using vsphere and currently we have a cf installation with 2 AZ
using 2 separate vsphere "Datacenters" (more details:
https://github.com/cloudfoundry/bosh-notes/issues/7). This means we have
a CF installation that is actually made up of 2 deployments. So, we need
to generate a manifest for az1 and another for az2. The job names in each
deployment must be unique across the installation (e.g.
cloud_controller_az1 and cloud_controller_az2) would be the cc job names in
each deployment.

Mike

On Wed, Sep 16, 2015 at 3:38 PM, Mike Youngstrom <youngm(a)gmail.com> wrote:

Here are some of the examples:

* Sensitive property management as part of manifest generation (encrypted
or acquired from an outside source)

* We have our own internal bosh releases and config that we'll need to
merge in with the things cf-deployment is doing. For example, if
cf-deployment tags v250 as including Diego 3333 and etcd 34 with given
templates perhaps we'd like to augment this with our own release jobs and
config that we know to work with cf-deployment 250's and perhaps tag it as
v250.lds and that becomes what we use to generate our manifests and upload
releases.

* Occasionally we may wish to use some config from a stock release not
currently exposed in a cf-deployment template. I'd like to be sure there
is a way we can add that config, in a not hacky way, without waiting for a
PR to be accepted and subsequent release.

* If for some reason we are forced to fork a stock release we'd like to
be able to use that forked release we are building instead of the publicly
available one for manifest generation and release uploads, etc.

Does that help?

Mike



On Tue, Sep 15, 2015 at 9:50 PM, Amit Gupta <agupta(a)pivotal.io> wrote:

Thanks for the feedback Mike!

Can you tell us more specifically what sort of extensions you need? It
would be great if cf-deployment provided an interface that could serve the
needs of essentially all operators of CF.

Thanks,
Amit

On Tue, Sep 15, 2015 at 4:02 PM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

This is great stuff! My organization currently maintains our own
custom ways to generate manifests, include secure properties, and manage
release versions.

We would love to base the next generation of our solution on
cf-deployment. Have you put any thought into how others might customize or
extend cf-deployment? Our needs are very similar to yours just sometimes a
little different.

Perhaps a private fork periodically merged with a known good release
combination (tag) might be appropriate? Or some way to include the same
tools into a wholly private repo?

Mike


On Tue, Sep 8, 2015 at 1:22 PM, Amit Gupta <agupta(a)pivotal.io> wrote:

Hi all,

The CF OSS Release Integration team (casually referred to as the "MEGA
team") is trying to solve a lot of tightly interrelated problems, and make
many of said problems less interrelated. It is difficult to address just
one issue without touching the others, so the following proposal addresses
several issues, but the most important ones are:

* decompose cf-release into many independently manageable,
independently testable, independently usable releases
* separate manifest generation strategies from the release source,
paving the way for Diego to be part of the standard deployment

This proposal will outline a picture of how manifest generation will
work in a unified manner in development, test, and integration
environments. It will also outline a picture of what each release’s test
pipelines will look like, how they will feed into a common integration
environment, and how feedback from the integration environment will feed
back into the test environments. Finally, it will propose a picture for
what the integration environment will look like, and how we get from the
current integration environment to where we want to be.

For further details, please feel free to view and comment here:


https://docs.google.com/document/d/1Viga_TzUB2nLxN_ILqksmUiILM1hGhq7MBXxgLaUOkY

Thanks,
Amit, CF OSS Release Integration team


Re: Failing to deploy Diego 0.1398.0 with CF214 using cf-boshworkspace 1.1.15 with cf-aws-tiny.yml and diego-aws.yml

Dmitri Sarytchev
 

The issue has been resolved. The underlying issue was that none of the consul agents would behave as servers, due to a 'lan: []' entry under the backbone_z1 machine in the cf deployment.


Adding new events table index requires truncation

Jeffrey Pak
 

Hi all,

The CAPI team is looking to merge in a PR to cloud_controller_ng, https://github.com/cloudfoundry/cloud_controller_ng/pull/418, which will update an index on the events table to include "id" as well as "timestamp". See https://www.pivotaltracker.com/story/show/101985370 for more information and discussion.

Older deployments with many events would experience a very slow deploy if this migration runs as-is. To prevent this from causing failed deploys or unintended downtime, we'd like to truncate the events table as part of the migration.

If we do this, it'll be made clear in the release notes and will most likely be included in v219.

Any questions or concerns?

Thanks,

Raina and Jeff
CF CAPI Team


Re: Throttling App Logging

Rohit Kumar
 

It isn't possible to throttle logging output on a per-application basis. It
is possible to configure the message_drain_buffer_size [1] to be lower than
the default value of 100, which will reduce the number of logs which
loggregator will buffer. If the producer is filling up logs too quickly,
loggregator will drop the messages present in its buffer. This
configuration will affect ALL the applications running on your Cloud
Foundry environment. You could play with that property and see if it helps.

Rohit

[1]:
https://github.com/cloudfoundry/loggregator/blob/develop/bosh/jobs/doppler/spec#L60-L62
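For example, a stub like the following could be merged into the deployment manifest to
lower the buffer; the property name comes from the spec linked in [1], while the stub
layout itself is just a sketch:

cat > doppler-buffer-stub.yml <<'EOF'
properties:
  doppler:
    message_drain_buffer_size: 50   # lower than the default of 100; applies to ALL apps
EOF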

On Mon, Sep 21, 2015 at 2:57 AM, Daniel Jones <
daniel.jones(a)engineerbetter.com> wrote:


Is it possible with the current logging infrastructure in CF to limit the
logging throughput of particular apps or spaces?

Current client is running CF multi-tenant, and has some particularly noisy
customers. It'd be nice to be able to put a hard limit on how much they can
pass through to downstream commercial log indexers.

Any suggestions most gratefully received!

Regards,

Daniel Jones
EngineerBetter.com


Re: Throttling App Logging

Aleksey Zalesov
 

Hi!

Today a CF quota can be set on three things:

1. Memory
2. Number of services
3. Number of routes

You can't limit the number of log messages.

But I think it's a good idea for a feature request! Excessive debug logging can overwhelm a log management system.

Aleksey Zalesov | CloudFoundry Engineer | Altoros
Tel: (617) 841-2121 ext. 5707 | Toll free: 855-ALTOROS
Fax: (866) 201-3646 | Skype: aleksey_zalesov
www.altoros.com | blog.altoros.com | twitter.com/altoros

On 21 Sep 2015, at 11:57, Daniel Jones <daniel.jones(a)engineerbetter.com> wrote:


Is it possible with the current logging infrastructure in CF to limit the logging throughput of particular apps or spaces?

Current client is running CF multi-tenant, and has some particularly noisy customers. It'd be nice to be able to put a hard limit on how much they can pass through to downstream commercial log indexers.

Any suggestions most gratefully received!

Regards,

Daniel Jones
EngineerBetter.com


Re: cf push a node js app without a manifest file

Jesse T. Alford
 

I suspect it might be forward vs. backward slashes in the file path that are
tripping things up there. Does using / in the Linux file path help?
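For example, adapting the command quoted below (the path and domain are the same
placeholders used there):

cf push myApp1 -c "node app.js" -d "myDomain.net" -i 1 -n dummyhost -m 64M \
  -p "/path/to/my/application" --no-manifest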

On Sun, Sep 20, 2015, 7:55 PM Varsha Nagraj <n.varsha(a)gmail.com> wrote:

I needed to have a test case for creating multiple threads pushing the app.
Since I had different host names, I was not sure if I could also use the
same app name. Thank you for the reply.

For q1) I have an issue. The following command worked on Windows: cf
push myApp1 -c "node app.js" -d "myDomain.net" -i 1 -n dummyhost -m 64M -p
"\path\to\my\application" --no-manifest"

But when I run the same command on Linux, I again get a "Failed:
incorrect usage" error. Is there an issue when we use double quotes?


Throttling App Logging

Daniel Jones
 

Is it possible with the current logging infrastructure in CF to limit the
logging throughput of particular apps or spaces?

Current client is running CF multi-tenant, and has some particularly noisy
customers. It'd be nice to be able to put a hard limit on how much they can
pass through to downstream commercial log indexers.

Any suggestions most gratefully received!

Regards,

Daniel Jones
EngineerBetter.com


Re: Re: Re: Starting failure with 504

Yancey
 

OK,Thx Amit!



Original Message
From: Amit Gupta<agupta@...>
To: Discussions about Cloud Foundry projects and the system overall.<cf-dev@...>
Sent: Monday, September 21, 2015 15:41
Subject: [cf-dev] Re: Re: Re: Starting failure with 504

Hi,

If you'd like the core development team to know this information when helping you with this problem, I'd recommend posting this and all future information on the Github issue you opened (please correct me if it's not you who opened that issue).  It's difficult to respond to an issue that's being discussed independently in two places.

Thanks,
Amit

On Mon, Sep 21, 2015 at 12:17 AM, yancey0623 <yancey0623@...> wrote:

Hi!

I tried reinstall the cf with:

1) /var/vcap/bosh/bin/monit stop all

2) kill monit process

3) rm -rf /var/vcap

4) cd {cf_nise_installer} && ./script/install.sh


but the error isn't fixed…


Original Message
From: yancey0623<yancey0623@...>
To: Discussions about Cloud Foundry projects and the system overall.<cf-dev@...>
Sent: Monday, September 21, 2015 09:36
Subject: Re: [cf-dev] Re: Re: Starting failure with 504

Thanks Amit!

I can upgrade my host OS later, but I could push my app before.



Original Message
From: Amit Gupta<agupta@...>
To: Discussions about Cloud Foundry projects and the system overall.<cf-dev@...>
Sent: Monday, September 21, 2015 02:55
Subject: [cf-dev] Re: Re: Starting failure with 504

Hi there,

I noticed you opened an issue on Github with the same problem.  I'll ask the core developer team to help with your issue, they will respond on Github.

Best,
Amit

On Sun, Sep 20, 2015 at 11:37 AM, Aleksey Zalesov <aleksey.zalesov@...> wrote:
Hello!

Ubuntu 14.04 is required for cf_nise_installer [1]. Can you upgrade your host OS and see if the issue persists?

[1]: https://github.com/yudai/cf_nise_installer


Aleksey Zalesov | CloudFoundry Engineer | Altoros
Tel: (617) 841-2121 ext. 5707 | Toll free: 855-ALTOROS
Fax: (866) 201-3646 | Skype: aleksey_zalesov


On 20 Sep 2015, at 14:08, yancey0623 <yancey0623@...> wrote:

Hi!
I pushed my demo app with the command: cf push xxx

cf_release version: 212

OS: Ubuntu 12.04

deployed with cf_nise_installer

here is the failure message:

Updating app hello-python in org DevBox / space bre as admin...
OK

Uploading hello-python...
Uploading app files from: /home/cf/xu.yan/hello-python
Uploading 2.7K, 7 files
Done uploading               
OK

Stopping app hello-python in org DevBox / space bre as admin...
OK

Starting app hello-python in org DevBox / space bre as admin...
FAILED
Server error, status code: 504, error code: 0, message:

in dea_next.log


{"timestamp":1442731880.415194,"message":"staging.task.failed","log_level":"info","source":"Staging","data":{"app_guid":"175d577d-46c4-4311-899f-eacdae64d164","task_id":"02f394d2b5b24d56a8bcdf4ff0618c9e","error":"command exited with failure","backtrace":["/var/vcap/packages/dea_next/vendor/cache/warden-ad18bff7dc56/em-warden-client/lib/em/warden/client/connection.rb:27:in `get'","/var/vcap/packages/dea_next/vendor/cache/warden-ad18bff7dc56/em-warden-client/lib/em/warden/client.rb:46:in `call'","/var/vcap/packages/dea_next/lib/container/container.rb:192:in `call'","/var/vcap/packages/dea_next/lib/container/container.rb:153:in `block in new_container_with_bind_mounts_and_rootfs'","/var/vcap/packages/dea_next/lib/container/container.rb:229:in `call'","/var/vcap/packages/dea_next/lib/container/container.rb:229:in `with_em'","/var/vcap/packages/dea_next/lib/container/container.rb:137:in `new_container_with_bind_mounts_and_rootfs'","/var/vcap/packages/dea_next/lib/container/container.rb:120:in `block in create_container'","/var/vcap/packages/dea_next/lib/container/container.rb:229:in `call'","/var/vcap/packages/dea_next/lib/container/container.rb:229:in `with_em'","/var/vcap/packages/dea_next/lib/container/container.rb:119:in `create_container'","/var/vcap/packages/dea_next/lib/dea/staging/staging_task.rb:543:in `resolve_staging_setup'","/var/vcap/packages/dea_next/lib/dea/staging/staging_task.rb:49:in `block in start'","/var/vcap/packages/dea_next/lib/dea/promise.rb:92:in `call'","/var/vcap/packages/dea_next/lib/dea/promise.rb:92:in `block in run'"]},"thread_id":69965282702100,"fiber_id":69965294576760,"process_id":44238,"file":"/var/vcap/packages/dea_next/lib/dea/staging/staging_task.rb","lineno":57,"method":"block in start"}
{"timestamp":1442731880.4168656,"message":"task.destroy.invalid","log_level":"error","source":"Staging","data":{"app_guid":"175d577d-46c4-4311-899f-eacdae64d164","task_id":"02f394d2b5b24d56a8bcdf4ff0618c9e"},"thread_id":69965282702100,"fiber_id":69965294368660,"process_id":44238,"file":"/var/vcap/packages/dea_next/lib/dea/task.rb","lineno":66,"method":"block in promise_destroy"}
root@bjlg-80p50-cf-02:/var/vcap/sys/log/dea_next#

in warden.log

{"timestamp":1442731879.5131524,"message":"Container created","log_level":"debug","source":"Warden::Container::Linux","data":{"handle":"18vtqk14egk"},"thread_id":69932978885420,"fiber_id":69932978855160,"process_id":28688,"file":"/var/vcap/data/packages/warden/724f030f2d6e90d02c2afbe90ed5fe1ce2de1667/warden/lib/warden/container/linux.rb","lineno":102,"method":"do_create"}
{"timestamp":1442731880.101295,"message":"Exited with status 1 (0.587s): [[\"/var/vcap/data/packages/warden/724f030f2d6e90d02c2afbe90ed5fe1ce2de1667/warden/src/closefds/closefds\", \"/var/vcap/data/packages/warden/724f030f2d6e90d02c2afbe90ed5fe1ce2de1667/warden/src/closefds/closefds\"], \"/var/vcap/data/warden/depot/18vtqk14egk/start.sh\"]","log_level":"warn","source":"Warden::Container::Linux","data":{"handle":"18vtqk14egk","stdout":"","stderr":""},"thread_id":69932978885420,"fiber_id":69932991380440,"process_id":28688,"file":"/var/vcap/data/packages/warden/724f030f2d6e90d02c2afbe90ed5fe1ce2de1667/warden/lib/warden/container/spawn.rb","lineno":135,"method":"set_deferred_success"}
{"timestamp":1442731880.104105,"message":"Exited with status 255 (0.001s): [[\"/var/vcap/data/packages/warden/724f030f2d6e90d02c2afbe90ed5fe1ce2de1667/warden/src/closefds/closefds\", \"/var/vcap/data/packages/warden/724f030f2d6e90d02c2afbe90ed5fe1ce2de1667/warden/src/closefds/closefds\"], \"/var/vcap/data/packages/warden/724f030f2d6e90d02c2afbe90ed5fe1ce2de1667/warden/root/linux/stop.sh\", \"/var/vcap/data/warden/depot/18vtqk14egk\", \"-w\", \"0\"]","log_level":"warn","source":"Warden::Container::Linux","data":{"handle":"18vtqk14egk","stdout":"","stderr":"execvp: No such file or directory\n"},"thread_id":69932978885420,"fiber_id":69932991380440,"process_id":28688,"file":"/var/vcap/data/packages/warden/724f030f2d6e90d02c2afbe90ed5fe1ce2de1667/warden/lib/warden/container/spawn.rb","lineno":135,"method":"set_deferred_success"}
{"timestamp":1442731880.412572,"message":"Container destroyed","log_level":"debug","source":"Warden::Container::Linux","data":{"handle":"18vtqk14egk"},"thread_id":69932978885420,"fiber_id":69932978855160,"process_id":28688,"file":"/var/vcap/data/packages/warden/724f030f2d6e90d02c2afbe90ed5fe1ce2de1667/warden/lib/warden/container/linux.rb","lineno":125,"method":"do_destroy"}
{"timestamp":1442731880.4129846,"message":"destroy (took 0.311137)","log_level":"debug","source":"Warden::Container::Linux","data":{"handle":"18vtqk14egk","request":{},"response":{}},"thread_id":69932978885420,"fiber_id":69932978855160,"process_id":28688,"file":"/var/vcap/data/packages/warden/724f030f2d6e90d02c2afbe90ed5fe1ce2de1667/warden/lib/warden/container/base.rb","lineno":300,"method":"dispatch"}

Could anyone help with this issue?





Re: Re: Re: Re: Starting failure with 504

Amit Kumar Gupta
 

Hi,

If you'd like the core development team to know this information when
helping you with this problem, I'd recommend posting this and all future
information on the Github issue you opened (please correct me if it's not
you who opened that issue). It's difficult to respond to an issue that's
being discussed independently in two places.

Thanks,
Amit

On Mon, Sep 21, 2015 at 12:17 AM, yancey0623 <yancey0623(a)163.com> wrote:

Hi!

I tried reinstalling CF with:

1) /var/vcap/bosh/bin/monit stop all

2) kill monit process

3) rm -rf /var/vcap

4) cd {cf_nise_installer} && ./script/install.sh


but the error isn't fixed…

Original Message
*From:* yancey0623<yancey0623(a)163.com>
*To:* Discussions about Cloud Foundry projects and the system overall.<
cf-dev(a)lists.cloudfoundry.org>
*Sent:* Monday, 21 September 2015, 09:36
*Subject:* Re: [cf-dev] Re: Re: Starting failure with 504

Thanks Amit!

I can upgrade my host OS later, but I was able to push my app before.



Original Message
*From:* Amit Gupta<agupta(a)pivotal.io>
*To:* Discussions about Cloud Foundry projects and the system overall.<
cf-dev(a)lists.cloudfoundry.org>
*Sent:* Monday, 21 September 2015, 02:55
*Subject:* [cf-dev] Re: Re: Starting failure with 504

Hi there,

I noticed you opened an issue on Github with the same problem. I'll ask
the core developer team to help with your issue; they will respond on
Github.

Best,
Amit

On Sun, Sep 20, 2015 at 11:37 AM, Aleksey Zalesov <
aleksey.zalesov(a)altoros.com> wrote:

Hello!

Ubuntu 14.04 is required for cf_nise_installer [1]. Can you upgrade your
host OS and see if the issue persists?

[1]: https://github.com/yudai/cf_nise_installer

Aleksey Zalesov | CloudFoundry Engineer | Altoros
Tel: (617) 841-2121 ext. 5707 | Toll free: 855-ALTOROS
Fax: (866) 201-3646 | Skype: aleksey_zalesov
www.altoros.com | blog.altoros.com | twitter.com/altoros


On 20 Sep 2015, at 14:08, yancey0623 <yancey0623(a)163.com> wrote:

Hi!
I pushed my demo app with the command: cf push xxx

cf_release version: 212

OS: Ubuntu 12.04

deployed with cf_nise_installer

here is the failure message:

Updating app hello-python in org DevBox / space bre as admin...
OK

Uploading hello-python...
Uploading app files from: /home/cf/xu.yan/hello-python
Uploading 2.7K, 7 files
Done uploading
OK

Stopping app hello-python in org DevBox / space bre as admin...
OK

Starting app hello-python in org DevBox / space bre as admin...
FAILED
Server error, status code: 504, error code: 0, message:

in dea_next.log


{"timestamp":1442731880.415194,"message":"staging.task.failed","log_level":"info","source":"Staging","data":{"app_guid":"175d577d-46c4-4311-899f-eacdae64d164","task_id":"02f394d2b5b24d56a8bcdf4ff0618c9e","error":"command exited with failure","backtrace":["/var/vcap/packages/dea_next/vendor/cache/warden-ad18bff7dc56/em-warden-client/lib/em/warden/client/connection.rb:27:in `get'","/var/vcap/packages/dea_next/vendor/cache/warden-ad18bff7dc56/em-warden-client/lib/em/warden/client.rb:46:in `call'","/var/vcap/packages/dea_next/lib/container/container.rb:192:in `call'","/var/vcap/packages/dea_next/lib/container/container.rb:153:in `block in new_container_with_bind_mounts_and_rootfs'","/var/vcap/packages/dea_next/lib/container/container.rb:229:in `call'","/var/vcap/packages/dea_next/lib/container/container.rb:229:in `with_em'","/var/vcap/packages/dea_next/lib/container/container.rb:137:in `new_container_with_bind_mounts_and_rootfs'","/var/vcap/packages/dea_next/lib/container/container.rb:120:in `block in create_container'","/var/vcap/packages/dea_next/lib/container/container.rb:229:in `call'","/var/vcap/packages/dea_next/lib/container/container.rb:229:in `with_em'","/var/vcap/packages/dea_next/lib/container/container.rb:119:in `create_container'","/var/vcap/packages/dea_next/lib/dea/staging/staging_task.rb:543:in `resolve_staging_setup'","/var/vcap/packages/dea_next/lib/dea/staging/staging_task.rb:49:in `block in start'","/var/vcap/packages/dea_next/lib/dea/promise.rb:92:in `call'","/var/vcap/packages/dea_next/lib/dea/promise.rb:92:in `block in run'"]},"thread_id":69965282702100,"fiber_id":69965294576760,"process_id":44238,"file":"/var/vcap/packages/dea_next/lib/dea/staging/staging_task.rb","lineno":57,"method":"block in start"}
{"timestamp":1442731880.4168656,"message":"task.destroy.invalid","log_level":"error","source":"Staging","data":{"app_guid":"175d577d-46c4-4311-899f-eacdae64d164","task_id":"02f394d2b5b24d56a8bcdf4ff0618c9e"},"thread_id":69965282702100,"fiber_id":69965294368660,"process_id":44238,"file":"/var/vcap/packages/dea_next/lib/dea/task.rb","lineno":66,"method":"block in promise_destroy"}
root@bjlg-80p50-cf-02:/var/vcap/sys/log/dea_next#

in warden.log

{"timestamp":1442731879.5131524,"message":"Container created","log_level":"debug","source":"Warden::Container::Linux","data":{"handle":"18vtqk14egk"},"thread_id":69932978885420,"fiber_id":69932978855160,"process_id":28688,"file":"/var/vcap/data/packages/warden/724f030f2d6e90d02c2afbe90ed5fe1ce2de1667/warden/lib/warden/container/linux.rb","lineno":102,"method":"do_create"}
{"timestamp":1442731880.101295,"message":"Exited with status 1 (0.587s): [[\"/var/vcap/data/packages/warden/724f030f2d6e90d02c2afbe90ed5fe1ce2de1667/warden/src/closefds/closefds\", \"/var/vcap/data/packages/warden/724f030f2d6e90d02c2afbe90ed5fe1ce2de1667/warden/src/closefds/closefds\"], \"/var/vcap/data/warden/depot/18vtqk14egk/start.sh\"]","log_level":"warn","source":"Warden::Container::Linux","data":{"handle":"18vtqk14egk","stdout":"","stderr":""},"thread_id":69932978885420,"fiber_id":69932991380440,"process_id":28688,"file":"/var/vcap/data/packages/warden/724f030f2d6e90d02c2afbe90ed5fe1ce2de1667/warden/lib/warden/container/spawn.rb","lineno":135,"method":"set_deferred_success"}
{"timestamp":1442731880.104105,"message":"Exited with status 255 (0.001s): [[\"/var/vcap/data/packages/warden/724f030f2d6e90d02c2afbe90ed5fe1ce2de1667/warden/src/closefds/closefds\", \"/var/vcap/data/packages/warden/724f030f2d6e90d02c2afbe90ed5fe1ce2de1667/warden/src/closefds/closefds\"], \"/var/vcap/data/packages/warden/724f030f2d6e90d02c2afbe90ed5fe1ce2de1667/warden/root/linux/stop.sh\", \"/var/vcap/data/warden/depot/18vtqk14egk\", \"-w\", \"0\"]","log_level":"warn","source":"Warden::Container::Linux","data":{"handle":"18vtqk14egk","stdout":"","stderr":"execvp: No such file or directory\n"},"thread_id":69932978885420,"fiber_id":69932991380440,"process_id":28688,"file":"/var/vcap/data/packages/warden/724f030f2d6e90d02c2afbe90ed5fe1ce2de1667/warden/lib/warden/container/spawn.rb","lineno":135,"method":"set_deferred_success"}
{"timestamp":1442731880.412572,"message":"Container destroyed","log_level":"debug","source":"Warden::Container::Linux","data":{"handle":"18vtqk14egk"},"thread_id":69932978885420,"fiber_id":69932978855160,"process_id":28688,"file":"/var/vcap/data/packages/warden/724f030f2d6e90d02c2afbe90ed5fe1ce2de1667/warden/lib/warden/container/linux.rb","lineno":125,"method":"do_destroy"}
{"timestamp":1442731880.4129846,"message":"destroy (took 0.311137)","log_level":"debug","source":"Warden::Container::Linux","data":{"handle":"18vtqk14egk","request":{},"response":{}},"thread_id":69932978885420,"fiber_id":69932978855160,"process_id":28688,"file":"/var/vcap/data/packages/warden/724f030f2d6e90d02c2afbe90ed5fe1ce2de1667/warden/lib/warden/container/base.rb","lineno":300,"method":"dispatch"}

Could anyone help with this issue?


Re: Re: Re: Starting failure with 504

Yancey
 

Hi!

I tried reinstalling CF with:

1) /var/vcap/bosh/bin/monit stop all

2) kill monit process

3) rm -rf /var/vcap

4) cd {cf_nise_installer} && ./script/install.sh


but the error isn't fixed…
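
For reference, here are the same steps as a single script (a rough sketch only; it assumes cf_nise_installer is already checked out, and CF_NISE_INSTALLER_DIR is a placeholder for that path):

#!/bin/bash
set -e
/var/vcap/bosh/bin/monit stop all      # 1) stop all monit-managed jobs
pkill -x monit || true                 # 2) kill the monit process itself
rm -rf /var/vcap                       # 3) remove the previous installation
cd "${CF_NISE_INSTALLER_DIR}"          # 4) re-run the installer
./script/install.sh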


Original Message
From: yancey0623<yancey0623@...>
To: Discussions about Cloud Foundry projects and the system overall.<cf-dev@...>
Sent: Monday, 21 September 2015, 09:36
Subject: Re: [cf-dev] Re: Re: Starting failure with 504

Thanks Amit!

I can upgrade my host OS later, but I was able to push my app before.



Original Message
From: Amit Gupta<agupta@...>
To: Discussions about Cloud Foundry projects and the system overall.<cf-dev@...>
Sent: Monday, 21 September 2015, 02:55
Subject: [cf-dev] Re: Re: Starting failure with 504

Hi there,

I noticed you opened an issue on Github with the same problem.  I'll ask the core developer team to help with your issue; they will respond on Github.

Best,
Amit

On Sun, Sep 20, 2015 at 11:37 AM, Aleksey Zalesov <aleksey.zalesov@...> wrote:
Hello!

Ubuntu 14.04 is required for cf_nise_installer [1]. Can you upgrade your host OS and see if the issue persists?

[1]: https://github.com/yudai/cf_nise_installer

Aleksey Zalesov | CloudFoundry Engineer | Altoros
Tel: (617) 841-2121 ext. 5707 | Toll free: 855-ALTOROS
Fax: (866) 201-3646 | Skype: aleksey_zalesov


On 20 Sep 2015, at 14:08, yancey0623 <yancey0623@...> wrote:

Hi!
I pushed my demo app with the command: cf push xxx

cf_release version: 212

OS: Ubuntu 12.04

deployed with cf_nise_installer

here is the failure message:

Updating app hello-python in org DevBox / space bre as admin...
OK

Uploading hello-python...
Uploading app files from: /home/cf/xu.yan/hello-python
Uploading 2.7K, 7 files
Done uploading               
OK

Stopping app hello-python in org DevBox / space bre as admin...
OK

Starting app hello-python in org DevBox / space bre as admin...
FAILED
Server error, status code: 504, error code: 0, message:

in dea_next.log


{"timestamp":1442731880.415194,"message":"staging.task.failed","log_level":"info","source":"Staging","data":{"app_guid":"175d577d-46c4-4311-899f-eacdae64d164","task_id":"02f394d2b5b24d56a8bcdf4ff0618c9e","error":"command exited with failure","backtrace":["/var/vcap/packages/dea_next/vendor/cache/warden-ad18bff7dc56/em-warden-client/lib/em/warden/client/connection.rb:27:in `get'","/var/vcap/packages/dea_next/vendor/cache/warden-ad18bff7dc56/em-warden-client/lib/em/warden/client.rb:46:in `call'","/var/vcap/packages/dea_next/lib/container/container.rb:192:in `call'","/var/vcap/packages/dea_next/lib/container/container.rb:153:in `block in new_container_with_bind_mounts_and_rootfs'","/var/vcap/packages/dea_next/lib/container/container.rb:229:in `call'","/var/vcap/packages/dea_next/lib/container/container.rb:229:in `with_em'","/var/vcap/packages/dea_next/lib/container/container.rb:137:in `new_container_with_bind_mounts_and_rootfs'","/var/vcap/packages/dea_next/lib/container/container.rb:120:in `block in create_container'","/var/vcap/packages/dea_next/lib/container/container.rb:229:in `call'","/var/vcap/packages/dea_next/lib/container/container.rb:229:in `with_em'","/var/vcap/packages/dea_next/lib/container/container.rb:119:in `create_container'","/var/vcap/packages/dea_next/lib/dea/staging/staging_task.rb:543:in `resolve_staging_setup'","/var/vcap/packages/dea_next/lib/dea/staging/staging_task.rb:49:in `block in start'","/var/vcap/packages/dea_next/lib/dea/promise.rb:92:in `call'","/var/vcap/packages/dea_next/lib/dea/promise.rb:92:in `block in run'"]},"thread_id":69965282702100,"fiber_id":69965294576760,"process_id":44238,"file":"/var/vcap/packages/dea_next/lib/dea/staging/staging_task.rb","lineno":57,"method":"block in start"}
{"timestamp":1442731880.4168656,"message":"task.destroy.invalid","log_level":"error","source":"Staging","data":{"app_guid":"175d577d-46c4-4311-899f-eacdae64d164","task_id":"02f394d2b5b24d56a8bcdf4ff0618c9e"},"thread_id":69965282702100,"fiber_id":69965294368660,"process_id":44238,"file":"/var/vcap/packages/dea_next/lib/dea/task.rb","lineno":66,"method":"block in promise_destroy"}
root@bjlg-80p50-cf-02:/var/vcap/sys/log/dea_next#

in warden.log

{"timestamp":1442731879.5131524,"message":"Container created","log_level":"debug","source":"Warden::Container::Linux","data":{"handle":"18vtqk14egk"},"thread_id":69932978885420,"fiber_id":69932978855160,"process_id":28688,"file":"/var/vcap/data/packages/warden/724f030f2d6e90d02c2afbe90ed5fe1ce2de1667/warden/lib/warden/container/linux.rb","lineno":102,"method":"do_create"}
{"timestamp":1442731880.101295,"message":"Exited with status 1 (0.587s): [[\"/var/vcap/data/packages/warden/724f030f2d6e90d02c2afbe90ed5fe1ce2de1667/warden/src/closefds/closefds\", \"/var/vcap/data/packages/warden/724f030f2d6e90d02c2afbe90ed5fe1ce2de1667/warden/src/closefds/closefds\"], \"/var/vcap/data/warden/depot/18vtqk14egk/start.sh\"]","log_level":"warn","source":"Warden::Container::Linux","data":{"handle":"18vtqk14egk","stdout":"","stderr":""},"thread_id":69932978885420,"fiber_id":69932991380440,"process_id":28688,"file":"/var/vcap/data/packages/warden/724f030f2d6e90d02c2afbe90ed5fe1ce2de1667/warden/lib/warden/container/spawn.rb","lineno":135,"method":"set_deferred_success"}
{"timestamp":1442731880.104105,"message":"Exited with status 255 (0.001s): [[\"/var/vcap/data/packages/warden/724f030f2d6e90d02c2afbe90ed5fe1ce2de1667/warden/src/closefds/closefds\", \"/var/vcap/data/packages/warden/724f030f2d6e90d02c2afbe90ed5fe1ce2de1667/warden/src/closefds/closefds\"], \"/var/vcap/data/packages/warden/724f030f2d6e90d02c2afbe90ed5fe1ce2de1667/warden/root/linux/stop.sh\", \"/var/vcap/data/warden/depot/18vtqk14egk\", \"-w\", \"0\"]","log_level":"warn","source":"Warden::Container::Linux","data":{"handle":"18vtqk14egk","stdout":"","stderr":"execvp: No such file or directory\n"},"thread_id":69932978885420,"fiber_id":69932991380440,"process_id":28688,"file":"/var/vcap/data/packages/warden/724f030f2d6e90d02c2afbe90ed5fe1ce2de1667/warden/lib/warden/container/spawn.rb","lineno":135,"method":"set_deferred_success"}
{"timestamp":1442731880.412572,"message":"Container destroyed","log_level":"debug","source":"Warden::Container::Linux","data":{"handle":"18vtqk14egk"},"thread_id":69932978885420,"fiber_id":69932978855160,"process_id":28688,"file":"/var/vcap/data/packages/warden/724f030f2d6e90d02c2afbe90ed5fe1ce2de1667/warden/lib/warden/container/linux.rb","lineno":125,"method":"do_destroy"}
{"timestamp":1442731880.4129846,"message":"destroy (took 0.311137)","log_level":"debug","source":"Warden::Container::Linux","data":{"handle":"18vtqk14egk","request":{},"response":{}},"thread_id":69932978885420,"fiber_id":69932978855160,"process_id":28688,"file":"/var/vcap/data/packages/warden/724f030f2d6e90d02c2afbe90ed5fe1ce2de1667/warden/lib/warden/container/base.rb","lineno":300,"method":"dispatch"}

Could anyone help with this issue?




Re: User cannot do CF login when UAA is being updated

Yunata, Ricky <rickyy@...>
 

Hi Joseph & all,

Hi Joseph, have you received the attachment from Dies?
To everyone else, I just wanted to know whether it is normal CF behaviour for users to be logged out while UAA is being updated, or whether I have my manifest wrongly configured.
It would be helpful if anyone could share an answer based on their experience. Thanks

Regards,
Ricky



From: CF Runtime [mailto:cfruntime(a)gmail.com]
Sent: Wednesday, 16 September 2015 7:08 PM
To: Discussions about Cloud Foundry projects and the system overall.
Subject: [cf-dev] Re: Re: Re: Re: Re: Re: User cannot do CF login when UAA is being updated

If you can't get the list to accept the attachment, you can give it to Dies and he should be able to get it to us.

Joseph
OSS Release Integration Team

On Tue, Sep 15, 2015 at 7:19 PM, Yunata, Ricky <rickyy(a)fast.au.fujitsu.com<mailto:rickyy(a)fast.au.fujitsu.com>> wrote:
Hi Joseph,

Yes, that is the case. I have sent my test results, but it seems that my e-mail did not get through. How can I send an attachment to this mailing list?

Regards,
Ricky


From: CF Runtime [mailto:cfruntime(a)gmail.com<mailto:cfruntime(a)gmail.com>]
Sent: Tuesday, 15 September 2015 8:10 PM
To: Discussions about Cloud Foundry projects and the system overall.
Subject: [cf-dev] Re: Re: Re: Re: User cannot do CF login when UAA is being updated

A couple of updates here for clarity. No databases are stored on NFS in any default installation; NFS is only used to store blobstore data. If you are using the postgres job from cf-release, since it is a single node there will be downtime during a stemcell deploy.

I talked with Dies from Fujitsu earlier and confirmed they are NOT using the postgres job but an external non-cf deployed postgres instance. So during a deploy, the UAA db should be up and available the entire time.

The issue they are seeing is that even though the database is up, and I'm guessing there is at least a single node of UAA up during the deploy, there are still login failures.

Joseph
OSS Release Integration Team

On Mon, Sep 14, 2015 at 6:39 PM, Filip Hanik <fhanik(a)pivotal.io<mailto:fhanik(a)pivotal.io>> wrote:
Amit, see previous comment.

The PostgreSQL database is stored on NFS, which is restarted during the nfs job update.
UAA, while up, is non-functional while the NFS job is updated because it can't get to the DB.



On Mon, Sep 14, 2015 at 5:09 PM, Amit Gupta <agupta(a)pivotal.io<mailto:agupta(a)pivotal.io>> wrote:
Hi Ricky,

My understanding is that you still need help, and the issues Jiang and Alexander raised are different. To avoid confusion, let's keep this thread focused on your issue.

Can you confirm that you have two UAA VMs in separate bosh jobs, separate AZs, etc.? Can you confirm that when you roll the UAAs, only one goes down at a time? The simplest way to effect a roll is to change some trivial property in the manifest for your UAA jobs. If you're using v215, any of the properties referenced here will do:

https://github.com/cloudfoundry/cf-release/blob/v215/jobs/uaa/spec#L321-L335

You should confirm that only one UAA is down at a time, and comes back up before bosh moves on to updating the other UAA.

While this roll is happening, can you just do `CF_TRACE=true cf auth USERNAME PASSWORD` in a loop, and if you see one that fails, post the output, along with noting the state of the bosh deploy when the error happens.
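
To be concrete, this is all I mean by the loop (a rough sketch; USERNAME and PASSWORD are placeholders for a real user's credentials, and the 5-second interval is arbitrary):

while true; do
  if ! CF_TRACE=true cf auth USERNAME PASSWORD > auth-trace.log 2>&1; then
    echo "$(date -u) login failed"                      # note when the failure happened
    cp auth-trace.log "auth-failure-$(date +%s).log"    # keep the failing trace so you can post it
  fi
  sleep 5
done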

Thanks,
Amit

On Mon, Sep 14, 2015 at 10:51 AM, Amit Gupta <agupta(a)pivotal.io<mailto:agupta(a)pivotal.io>> wrote:
Ricky, Jiang, Alexander, are the three of you working together? It's hard to tell since you've got Fujitsu, Gmail, and Altoros email addresses. Are you folks talking about the same issue with the same deployment, or three separate issues?

Ricky, if you still need assistance with your issue, please let us know.

On Mon, Sep 14, 2015 at 10:16 AM, Lomov Alexander <alexander.lomov(a)altoros.com<mailto:alexander.lomov(a)altoros.com>> wrote:
Yes, the problem is that the PostgreSQL database is stored on NFS, which is restarted during the nfs job update. I’m sure that you’ll be able to run updates without an outage with a few customizations.

It is hard to tell without knowing your environment, but in the common case the steps will be the following:


1. Add additional instances to the nfs job and customize it for replication (for instance, use these docs for release customization [1]).
2. Make your NFS job update serially, without other jobs updating in parallel (as is done for postgresql [2]).
3. Check your options in the update section [3]; see the sketch after these links.

[1] https://help.ubuntu.com/community/HighlyAvailableNFS
[2] https://github.com/cloudfoundry/cf-release/blob/master/example_manifests/minimal-aws.yml#L115-L116
[3] https://github.com/cloudfoundry/cf-release/blob/master/example_manifests/minimal-aws.yml#L57-L62
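
And the sketch mentioned above: the cited sections can be printed directly with something like the following (the raw.githubusercontent.com URL mirrors the github links, and the line ranges are the ones cited against master, so they may drift over time):

curl -s https://raw.githubusercontent.com/cloudfoundry/cf-release/master/example_manifests/minimal-aws.yml | sed -n '57,62p;115,116p'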

On Sep 14, 2015, at 9:47 AM, Yitao Jiang <jiangyt.cn(a)gmail.com<mailto:jiangyt.cn(a)gmail.com>> wrote:

On upgrading the deployment, UAA was not working because the uaadb filesystem hung. In my environment, the nfs-wal-server's IP changed, which caused uaadb and ccdb to hang. Hard-rebooting the uaadb VM and restarting the uaa service solved the issue.

Hopes can help you.

On Mon, Sep 14, 2015 at 2:13 PM, Yunata, Ricky <rickyy(a)fast.au.fujitsu.com<mailto:rickyy(a)fast.au.fujitsu.com>> wrote:
Hello,

I have a question regarding UAA in Cloud Foundry. I’m currently running Cloud Foundry on OpenStack.
I have 2 availability zones and redundancy for the important VMs, including UAA.
Whenever I do an upgrade of either the stemcell or the CF release, users are not able to log in to CF while CF is updating the UAA VM.
My question is: is this normal behaviour? If I have redundant UAA VMs, shouldn’t users still be able to log in even while UAA is being updated?
I’ve done this test a few times, with different CF versions and stemcells, and all of them give me the same result. The latest test I’ve done was to upgrade the CF version from 212 to 215.
Has anyone experienced the same issue?

Regards,
Ricky
Disclaimer

The information in this e-mail is confidential and may contain content that is subject to copyright and/or is commercial-in-confidence and is intended only for the use of the above named addressee. If you are not the intended recipient, you are hereby notified that dissemination, copying or use of the information is strictly prohibited. If you have received this e-mail in error, please telephone Fujitsu Australia Software Technology Pty Ltd on + 61 2 9452 9000<tel:%2B%2061%202%209452%209000> or by reply e-mail to the sender and delete the document and all copies thereof.


Whereas Fujitsu Australia Software Technology Pty Ltd would not knowingly transmit a virus within an email communication, it is the receiver’s responsibility to scan all communication and any files attached for computer viruses and other defects. Fujitsu Australia Software Technology Pty Ltd does not accept liability for any loss or damage (whether direct, indirect, consequential or economic) however caused, and whether by negligence or otherwise, which may result directly or indirectly from this communication or any files attached.


If you do not wish to receive commercial and/or marketing email messages from Fujitsu Australia Software Technology Pty Ltd, please email unsubscribe(a)fast.au.fujitsu.com<mailto:unsubscribe(a)fast.au.fujitsu.com>




--

Regards,

Yitao
jiangyt.github.io<http://jiangyt.github.io/>





