
Re: pcf_dev

Anwar Chirakkattil <ambumon786@...>
 

Hi Harneet,

There are certain limitations when you push a Docker image to Cloud Foundry. As per the docs:

"By default, apps listen for connections on the port specified in the PORT environment variable for the app. Cloud Foundry allocates this value dynamically.

When configuring a Docker image for Cloud Foundry, you can control the exposed port and the corresponding value of PORT by specifying the EXPOSE directive in the image Dockerfile. If you specify the EXPOSE directive, then the corresponding app pushed to Cloud Foundry listens on that exposed port. For example, if you set EXPOSE to 7070, then the app listens for connections on port 7070.

If you do not specify a port in the EXPOSE directive, then the app listens on the value of the PORT environment variable as determined by Cloud Foundry.

If you set the PORT environment variable via an ENV directive in a Dockerfile, Cloud Foundry overrides the value with the system-determined value.

Cloud Foundry supports only one exposed port on the image." 

So, in your case, make sure that your image exposes the correct port and that your app actually listens on it, so it can be reached from outside.
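For illustration only (this is my own minimal sketch, not something from the docs or the original thread; the base image, file names and port are placeholders), a Dockerfile that pins the exposed port so the pushed app and the router agree on it:

```dockerfile
# Minimal sketch with placeholder values. The app process must bind to
# 0.0.0.0 on the same port that is EXPOSEd; Cloud Foundry will set PORT
# to this value and route traffic to it.
FROM node:8-alpine
WORKDIR /app
COPY . /app
EXPOSE 8080
CMD ["node", "server.js"]
```

If the process inside the container listens on a different port than the one exposed (or binds only to localhost), the router cannot reach it and requests can fail with errors like the 502 "Registered endpoint failed to handle the request" you are seeing.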

Thanks,
Anwar



Ordering Logs from Loggregator #loggregator

Adam Hevenor
 

Recently the Loggregator team has been researching the implications of streaming and the lack of an ordering guarantee associated with it. I wanted to post a few findings, as this is a common inquiry and the community should be aware of the best development practices around it.

  1. If you are developing a client that displays a stream of logs to users, you should consider ordering them to improve the debugging UX. There are a couple of common techniques for this. One is to batch the logs and display whatever logs you have in that timeframe, sorted by timestamp; this works well for CLIs (a minimal batching sketch follows this list). For web clients you can use dynamic HTML to insert older logs into the sorted output as they appear. This is nice because by the time a user grabs the content for a copy-paste it will likely be both complete and in order.
  2. The cf CLI batches and sorts logs for the user; this functionality was taken out in a recent version of the CLI and only recently re-introduced in version 6.33.1. The cf CLI uses a wait period of 300ms for batching the logs, which is not noticeable in my experience.
  3. The firehose does not offer a deterministic routing mechanism for ordering.
  4. Syslog drains are serviced by two adapters for HA reasons, so ingestors may need to be configured to take full advantage of the nanosecond precision of the timestamps and to ensure proper sorting at rest. Here are some helpful instructions we have found for both ELK and Splunk.
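To illustrate the batch-and-sort technique from point 1, here is a rough Go sketch (my own illustration, not Loggregator or cf CLI code; the envelope type is a simplified stand-in for a real log envelope):

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// envelope is a simplified stand-in for a Loggregator log envelope;
// only the fields needed for this sketch are included.
type envelope struct {
	Timestamp int64 // nanoseconds since the Unix epoch
	Message   string
}

// batchAndSort collects whatever arrives on in during each window
// (e.g. 300ms, as the cf CLI does) and emits that batch sorted by timestamp.
func batchAndSort(in <-chan envelope, window time.Duration, display func([]envelope)) {
	ticker := time.NewTicker(window)
	defer ticker.Stop()
	var batch []envelope
	for {
		select {
		case e, ok := <-in:
			if !ok {
				display(sortByTime(batch)) // flush whatever is left
				return
			}
			batch = append(batch, e)
		case <-ticker.C:
			display(sortByTime(batch))
			batch = nil
		}
	}
}

func sortByTime(b []envelope) []envelope {
	sort.Slice(b, func(i, j int) bool { return b[i].Timestamp < b[j].Timestamp })
	return b
}

func main() {
	in := make(chan envelope, 16)
	go func() {
		// Out-of-order arrival to demonstrate the sort.
		in <- envelope{Timestamp: 2000, Message: "second"}
		in <- envelope{Timestamp: 1000, Message: "first"}
		close(in)
	}()
	batchAndSort(in, 300*time.Millisecond, func(b []envelope) {
		for _, e := range b {
			fmt.Println(e.Timestamp, e.Message)
		}
	})
}
```

A larger window improves ordering at the cost of display latency; 300ms is simply the trade-off the cf CLI currently makes.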
Thanks
Adam Hevenor
Loggregator PM


Re: Independent AZs deployments vs. stretched deployments

David Sabeti
 

I just thought I'd point out that cf-deployment doesn't include etcd any longer. It also includes ops-files for deploying the experimental BOSH DNS release, which obviates the need for consul as well -- depending on your aversion to risk, or lack thereof, you may consider getting rid of consul using those ops-files. Alternatively, you could deploy consul to two AZs in the same way that Marco described deploying MySQL -- one consul node in z1, and two nodes in z2.

Are you using cf-mysql as the internal database? If so, that's the last component that uses three AZs, and it sounds like Marco has some options for you there.
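For illustration, removing consul in favor of BOSH DNS is a matter of adding the relevant ops-files to your deploy; the exact file name below is my assumption and may differ between cf-deployment versions, so check the operations/ directory of the version you use:

```sh
# Sketch only: the ops-file path is an assumption; verify it in your
# cf-deployment checkout before deploying.
bosh -e my-env -d cf deploy cf-deployment.yml \
  -o operations/experimental/use-bosh-dns.yml
```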

On Fri, Jan 12, 2018 at 7:44 AM Marco Nicosia <mnicosia@...> wrote:
I can't speak for Consul or etcd, but cf-MySQL, in HA mode, can be deployed in a 2+1 mode where you're still deploying an odd number of VMs for consensus, but across only two AZs.

This gives you some degree of protection against AZ failure, but yes, not the same resiliency as three AZs.

Let’s say you have two AZs, the “major,” with two cluster nodes, and the “minor” with only one.

With two AZs, if there is a net split, the AZ with the majority of cluster nodes will continue to run.

If you suffer an AZ failure, however, you run a 50/50 chance of staying up. If the major AZ goes down, you're down. In that respect you're no better off than running a single AZ. However, you may have a faster path to recovery, in that you will have a seed in the minor AZ from which to rebuild, possibly more quickly than rebuilding the major AZ.

This differs from traditional HA where there are two equal master nodes, one in each AZ. This does give the ability to fail back and forth between the two, but typically not automatically.

If you are interested in investing some effort, we could probably work together to establish the processes by which one could “recover” the minor AZ to run in non-HA mode. I think it should be possible to write yourself a “break glass in case of emergency” manifest which you could use to redeploy the minor AZ to restore availability.


On Fri, Jan 12, 2018 at 05:12 Juan Pablo Genovese <juanpgenovese@...> wrote:
Matthias, Maxim,

Thank you for your input.
In my experience, a stretched architecture is the way to go, and I agree on that, but sadly we are limited to two and only two AZs. The only good thing is that the network connection between them is outstanding.
Just like Maxim said, the problem would be with Consul and etcd, which have to be deployed in odd numbers so the split-brain problem can be avoided. From what I know, Consul's consensus requires (n/2) + 1 active members, so deploying 2+3 Consul nodes won't work.
I wonder if anybody solved this issue or managed to provide a workaround without individual CF deployments.



2018-01-12 11:05 GMT+00:00 Matthias Winzeler <matthias.winzeler@...>:
Hi Juan-Pablo

As long as your AZs are connected using a low-latency network (i.e. within the same data center), you can happily deploy a CF over multiple zones.
This is fully supported by CF/BOSH (from an operator perspective, it makes no difference whether you use multiple zones or one) and transparent to your users (their apps will be evenly distributed over all zones).

We have around 20 CF installations, each running with 3 zones. Downtime or maintenance of a zone does not affect end users, and it's easy to manage from an ops perspective.

We never tried the data synchronization approach since it requires manual plumbing and brings some additional challenges (e.g. blobstores and backing databases would need to be synchronized).

I think the 1 CF : n AZs setup is also what Pivotal recommends, e.g. for AWS use multiple AZs, for vCenter use multiple clusters.

Best regards
Matthias
Swisscom Application Cloud

2018-01-12 11:33 GMT+01:00 Juan Pablo Genovese <juanpgenovese@...>:
Hello everyone!

I'm having an internal discussion about the best way to deploy CF across two different AZs.
One proposal is to deploy two CF foundations, one per AZ, and synchronize data.
The other proposal is one foundation encompassing both AZs.

Each of us has their own views, but I'm very curious about past experiences and pains you might have experienced with both models.

Thank you!

--
Mis mejores deseos,
Best wishes,
Meilleurs vœux,

Juan Pablo
------------------------------------------------------
http://www.jpgenovese.com








--
Mis mejores deseos,
Best wishes,
Meilleurs vœux,

Juan Pablo
------------------------------------------------------
http://www.jpgenovese.com

--
  Marco Nicosia
  Product Manager
  Pivotal Software, Inc.


Re: Independent AZs deployments vs. stretched deployments

Marco Nicosia
 

I can't speak for Consul or etcd, but cf-MySQL, in HA mode, can be deployed in a 2+1 mode where you're still deploying an odd number of VMs for consensus, but across only two AZs.

This gives you some degree of protection against AZ failure, but yes, not the same resiliency as three AZs.

Let’s say you have two AZs, the “major,” with two cluster nodes, and the “minor” with only one.

With two AZs, if there is a net split, the AZ with the majority of cluster nodes will continue to run.

If you suffer an AZ failure, however, you run a 50/50 chance of staying up. If the major AZ goes down, you're down. In that respect you're no better off than running a single AZ. However, you may have a faster path to recovery, in that you will have a seed in the minor AZ from which to rebuild, possibly more quickly than rebuilding the major AZ.

This differs from traditional HA where there are two equal master nodes, one in each AZ. This does give the ability to fail back and forth between the two, but typically not automatically.

If you are interested in investing some effort, we could probably work together to establish the processes by which one could “recover” the minor AZ to run in non-HA mode. I think it should be possible to write yourself a “break glass in case of emergency” manifest which you could use to redeploy the minor AZ to restore availability.
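To make the 2+1 shape concrete, here is a minimal BOSH instance-group sketch (my illustration rather than anything from cf-mysql-release's docs; the job, release, network and vm_type names are placeholders). With two AZs listed and three instances, BOSH spreads the nodes two-and-one across the zones:

```yaml
# Sketch only: names below are placeholders, not necessarily the real
# cf-mysql-release job names. The point is azs + an odd instance count.
instance_groups:
- name: mysql
  instances: 3        # odd number of nodes for consensus
  azs: [z1, z2]       # BOSH places 2 nodes in one AZ and 1 in the other
  vm_type: default
  stemcell: default
  networks:
  - name: default
  jobs:
  - name: mysql       # placeholder
    release: cf-mysql # placeholder
```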


On Fri, Jan 12, 2018 at 05:12 Juan Pablo Genovese <juanpgenovese@...> wrote:
Matthias, Maxim,

Thank you for your input.
In my experience, a stretched architecture is the way to go, and I agree on that, but sadly we are limited to two and only two AZs. The only good thing is that the network connection between them is outstanding.
Just like Maxim said, the problem would be with Consul and etcd, which have to be deployed in odd numbers so the split-brain problem can be avoided. From what I know, Consul's consensus requires (n/2) + 1 active members, so deploying 2+3 Consul nodes won't work.
I wonder if anybody solved this issue or managed to provide a workaround without individual CF deployments.



2018-01-12 11:05 GMT+00:00 Matthias Winzeler <matthias.winzeler@...>:
Hi Juan-Pablo

As long as your AZs are connected using a low-latency network (i.e. within the same data center), you can happily deploy a CF over multiple zones.
This is fully supported by CF/BOSH (from an operator perspective, it makes no difference whether you use multiple zones or one) and transparent to your users (their apps will be evenly distributed over all zones).

We have around 20 CF installations, each running with 3 zones. Downtime or maintenance of a zone does not affect end users, and it's easy to manage from an ops perspective.

We never tried the data synchronization approach since it requires manual plumbing and brings some additional challenges (e.g. blobstores and backing databases would need to be synchronized).

I think the 1 CF : n AZs setup is also what Pivotal recommends, e.g. for AWS use multiple AZs, for vCenter use multiple clusters.

Best regards
Matthias
Swisscom Application Cloud

2018-01-12 11:33 GMT+01:00 Juan Pablo Genovese <juanpgenovese@...>:
Hello everyone!

I'm having an internal discussion about the best way to deploy CF across two different AZs.
One proposal is to deploy two CF foundations, one per AZ, and synchronize data.
The other proposal is one foundation encompassing both AZs.

Each of us has their own views, but I'm very curious about past experiences and pains you might have experienced with both models.

Thank you!

--
Mis mejores deseos,
Best wishes,
Meilleurs vœux,

Juan Pablo
------------------------------------------------------
http://www.jpgenovese.com








--
Mis mejores deseos,
Best wishes,
Meilleurs vœux,

Juan Pablo
------------------------------------------------------
http://www.jpgenovese.com

--
  Marco Nicosia
  Product Manager
  Pivotal Software, Inc.
  c: 650-796-2948


Re: Independent AZs deployments vs. stretched deployments

Juan Pablo Genovese
 

Matthias, Maxim,

Thank you for your input.
In my experience, a stretched architecture is the way to go, and I agree on that, but sadly we are limited to two and only two AZs. The only good thing is that the network connection between them is outstanding.
Just like Maxim said, the problem would be with Consul and etcd, which have to be deployed in odd numbers so the split-brain problem can be avoided. From what I know, Consul's consensus requires (n/2) + 1 active members, so deploying 2+3 Consul nodes won't work (with five nodes split 3+2 across two AZs, quorum is three, so losing the AZ that holds three nodes leaves the remaining two below quorum).
I wonder if anybody solved this issue or managed to provide a workaround without individual CF deployments.



2018-01-12 11:05 GMT+00:00 Matthias Winzeler <matthias.winzeler@...>:

Hi Juan-Pablo

As long as your AZs are connected using a low-latency network (i.e. within the same data center), you can happily deploy a CF over multiple zones.
This is fully supported by CF/BOSH (from an operator perspective, it makes no difference whether you use multiple zones or one) and transparent to your users (their apps will be evenly distributed over all zones).

We have around 20 CF installations, each running with 3 zones. Downtime or maintenance of a zone does not affect end users, and it's easy to manage from an ops perspective.

We never tried the data synchronization approach since it requires manual plumbing and brings some additional challenges (e.g. blobstores and backing databases would need to be synchronized).

I think the 1 CF : n AZs setup is also what Pivotal recommends, e.g. for AWS use multiple AZs, for vCenter use multiple clusters.

Best regards
Matthias
Swisscom Application Cloud

2018-01-12 11:33 GMT+01:00 Juan Pablo Genovese <juanpgenovese@...>:
Hello everyone!

I'm having an internal discussion about the best way to deploy CF across two different AZs.
One proposal is to deploy two CF foundations, one per AZ, and synchronize data.
The other proposal is one foundation encompassing both AZs.

Each of us has their own views, but I'm very curious about past experiences and pains you might have experienced with both models.

Thank you!

--
Mis mejores deseos,
Best wishes,
Meilleurs vœux,

Juan Pablo
------------------------------------------------------
http://www.jpgenovese.com








--
Mis mejores deseos,
Best wishes,
Meilleurs vœux,

Juan Pablo
------------------------------------------------------
http://www.jpgenovese.com


Re: Independent AZs deployments vs. stretched deployments

Maxim Avezbakiev
 

Hi Juan Pablo,

If the number of AZs is fixed (you mentioned two), then a stretched CF deployment over two AZs will not give you the benefit of a highly available CF: a few of CF's internal components use quorum-based clustering, so an even split of nodes across the AZs cannot maintain quorum if an AZ is lost.
So, in that case, you should consider having two separate foundations (one per AZ). However, as Matthias already mentioned, data synchronisation between two foundations is not an easy undertaking.

If you can add another AZ to your topology, then a CF deployed across 3 AZs in a stretched fashion will be a better solution, imho.

Cheers,
Maxim.


Re: Independent AZs deployments vs. stretched deployments

Matthias Winzeler
 

Hi Juan-Pablo

As long as your AZs are connected using a low-latency network (i.e. within the same data center), you can happily deploy a CF over multiple zones.
This is fully supported by CF/BOSH (from an operator perspective, it makes no difference whether you use multiple zones or one) and transparent to your users (their apps will be evenly distributed over all zones).

We have around 20 CF installations, each running with 3 zones. Downtime or maintenance of a zone does not affect end users, and it's easy to manage from an ops perspective.

We never tried the data synchronization approach since it requires manual plumbing and brings some additional challenges (e.g. blobstores and backing databases would need to be synchronized).

I think the 1 CF : n AZs setup is also what Pivotal recommends, e.g. for AWS use multiple AZs, for vCenter use multiple clusters.

Best regards
Matthias
Swisscom Application Cloud

2018-01-12 11:33 GMT+01:00 Juan Pablo Genovese <juanpgenovese@...>:

Hello everyone!

I'm having an internal discussion about the best way to deploy CF across two different AZs.
One proposal is to deploy two CF foundations, one per AZ, and synchronize data.
The other proposal is one foundation encompassing both AZs.

Each of us has their own views, but I'm very curious about past experiences and pains you might have experienced with both models.

Thank you!

--
Mis mejores deseos,
Best wishes,
Meilleurs vœux,

Juan Pablo
------------------------------------------------------
http://www.jpgenovese.com




--
Matthias Winzeler
Mattenenge 8
3011 Bern
mailto: matthias.winzeler@...


Independent AZs deployments vs. stretched deployments

Juan Pablo Genovese
 

Hello everyone!

I'm having an internal discussion about the best way to deploy CF across two different AZs.
One proposal is to deploy two CF foundations, one per AZ, and synchronize data.
The other proposal is one foundation encompassing both AZs.

Each of us has their own views, but I'm very curious about past experiences and pains you might have experienced with both models.

Thank you!

--
Mis mejores deseos,
Best wishes,
Meilleurs vœux,

Juan Pablo
------------------------------------------------------
http://www.jpgenovese.com


[feedback requested] proposal value substitution in app manifests

Koper, Dies <diesk@...>
 

Hi community,

 

The cf CLI team has explored how to better address the problems that the deprecated app manifest inheritance feature was intended to solve, as well as how to provide secrets when pushing apps without storing them alongside other app configuration in the app manifest.

Please review & comment:

https://docs.google.com/document/d/1yax7Hjw_YJiKwh2aAwY3r-BtuDpJvoUaPvQ7B3sTju4/edit?usp=sharing

 

We’d like to start development later this month, so please leave your feedback by the end of next week (19 Jan).

 

Cheers,

Dies Koper & Jay Badenhope
Cloud Foundry Product Managers - CLI

 

 


pcf_dev

harneet chugga <harneet.chugga@...>
 

Hello Team,

I am trying to deploy a microgateway on PCF Dev. I have a Docker image and a manifest.yml. My image is successfully pushed and started, but when I try to hit the URL I get the following error:
502 Bad Gateway: Registered endpoint failed to handle the request.
Please help me resolve the problem.

Regards,
Harneet 


proposal to move app manifest processing logic to a server side API

Koper, Dies <diesk@...>
 

Hi community,

 

Zach, Jay and I have been discussing moving the processing logic of the app manifest supported by the cf CLI and Java client to a server API.

This would make it simpler for manifests to be processed in the same way by cf CLI, Java, and any other clients.

For the cf CLI, `v3-push` would use this API to add support for app manifests with CC V3 features such as multiple buildpacks and processes.


Our proposal is in the following document – please provide your comments there.

https://docs.google.com/document/d/1JBWFP86t5mgu7_Cie97AIDlsFGyHIGsfJb3kmcNtHZE/edit?usp=sharing

 

We’d like to start this work in the next two weeks, and plan to begin development based on comments received by the end of next week (Fri 19 Jan).

 

Cheers,

Dies Koper, Jay Badenhope, Zach Robinson
Cloud Foundry Product Managers – CLI, CAPI

 

 


Re: CF CLI v6.34.0 Released Today - New push implementation, adds symlink support, deprecates some app manifest features

Michael Maximilien
 

Congrats to the CLI team on the new push!

This was clearly a major undertaking and the feature list looks superb.

Can't wait to try it. Cheers all,

max

On Tue, Jan 9, 2018 at 8:53 PM, Koper, Dies <diesk@...> wrote:

The CF CLI team cut 6.34.0 today.

Deb, yum and Homebrew repos have been updated; binaries, installers and link to release notes are available at:

 

https://github.com/cloudfoundry/cli#downloads

New push implementation

In this cf CLI release, the v2-push that was exposed in cf CLI 6.33.0 has become the default push. It addresses a number of issues, adds improvements to performance and stability of the push process, and deprecates some app manifest features. This release will make push easier to maintain and enhance in the future.

 

push Fixes and Enhancements

  • push initially compares the "current" state with the "desired" state and displays this in a diff-like format to give a quick understanding of the updates it is going to make.
  • push now allows environment variables with ${...} in them in a manifest file. (#682)
  • push now preserves relative symlinks in app files. This makes it easier to work with Node.js apps using npm link, but note that it now errors when it detects a symlink pointing outside the app folder (e.g. a node_modules folder containing external symlinks). (#903)
  • push has a clarified error message when the route is not in the same space as the app. (#977)
  • The pattern format for the .cfignore exclusion file had not been well defined. To address this, push uses an external library that is compatible with git's rules for .gitignore. Folders containing only a .gitignore file are now included. (#993)
  • push creates many fewer temporary files during the package creation and upload process. This reduced push time from 21 minutes to 4 minutes in one case. (#1006)
  • push breaks up the API call to check the Cloud Controller cache for existing app files in batches to reduce the chance of timeouts. (#1123)
  • push resolves an issue when no value is specified for services in the app manifest. (#1142)
  • push resolves issues with platform-specific case sensitivity and locking of file and directory names by processing app bits in memory instead of writing them to disk. (#1147 and #1223)
  • push no longer adds new routes when updating an app and using --random-route. (#1177)
  • push resolves an issue with the generated random hostname for an app pushed with a non-ASCII app name. (#1214)
  • push has a new, smaller dictionary to generate random HTTP routes in order to avoid the use of questionable words. (Also resolves #1283)

Deprecations

  • App manifest deprecations getting a grace period:
    • For at least the next six months, when you use these features, the "old" push implementation is invoked and a deprecation message will be displayed. In this case, the fixes and enhancements of the "new" push (listed above) will not be invoked.
    • See blog post regarding app manifest changes on https://www.cloudfoundry.org/blog/coming-changes-app-manifest-simplification/ for more details. Please review your app manifests to see if they use the deprecated features.
    • push no longer supports app manifest route declarations using any of the host, hosts, domain, domains, or no-hostname attributes. You can use the routes attribute instead.
  • App manifest deprecations effective immediately (no grace period):
    • push no longer processes ${random-word} in the app manifest. We recently discovered this undocumented feature. If the intent was to create a random hostname, you can use random-route: true in the app manifest.
  • Other deprecations:
    • push does not accept conflicting flag combinations such as cf v2-push myapp --no-route --random-route.

Updated commands

  • install-plugin now displays a warning on its help page about plugins from untrusted authors.
  • install-plugin now displays the correct version of an existing plugin when installing a different version. (#946 (comment))

New & updated community plugins

See https://plugins.cloudfoundry.org/ for more information:

  • Swisscom Application Cloud v0.1.1 - The official Swisscom Application Cloud plugin gives you access to all the additional features of the App Cloud

Enjoy!

 

Regards,

Dies Koper & Jay Badenhope
Cloud Foundry Product Managers - CLI

 

 





Your security story on CFF blog.

Caitlyn O'Connell <coconnell@...>
 

Hi all:

Cloud Foundry Foundation is partnering with Snyk to publish a monthly series on security in open source and cloud. These installments will be published on the Cloud Foundry blog and may take the form of Q&As or long-form posts. Snyk, a recent addition to the Foundation membership, is an open source-oriented security start-up.

Do you have a great security story? Are you working on security initiatives on your team? How has Cloud Foundry impacted your security story?

We'd love to hear from you and feature you on the blog. Please reach out to me directly and I'll get you in touch with the folks at Snyk so we can publish your company's security story.

Best,
Caitlyn

--
Caitlyn O'Connell
Marketing Communications Manager
Cloud Foundry Foundation
818 439 5079 | @caitlyncaleah

Want to contribute to our blog? Email content@...


CAB call for January is Wednesday 1/17 @ 8a PST

Michael Maximilien
 

FYI...

Hi, all,
 
Happy New Year 2018! 
 
First CAB call is scheduled for Wednesday 1/17 @ 8a PST.
 
WIP agenda in [1] but summary:
  1. CFF updates.
  2. highlights from different PMCs and teams.
  3. summary of the survey I sent last month. There is still time to take it if you have not already, go here: https://www.surveymonkey.com/r/W9PZCK2
  4. TBD
------
As you can see, there is one spot open for a new project talk if you want to suggest or nominate one. Please do so by Monday, replying to me here or on Slack.

Talk soon. Best,
 


We are groot! Merging garden and groot (ACTION MAY BE REQUIRED)

Julz Friedman
 

Hi cf-dev garden and groot fans!


I'd like to quickly discuss some changes we're making to garden (the Cloud Foundry app runtime's container API) and groot (the next-generation rootfs management library, which has been developed as a separate release at https://github.com/cloudfoundry/grootfs-release) with the next version of garden-runc-release.


Tl;dr: in the next garden-runc-release we're going to ship grootfs built in. Therefore we're EOLing the separate grootfs-release (eventually we'll also EOL the existing aufs-based rootfs management code in garden-runc, although for now you can opt back in to it if you need to; we have no timeline for this yet). This means there will be no more updates to grootfs-release: users of grootfs can simply update to the latest garden-runc-release (see notes below on migrating - tl;dr: you don't have to do anything!).


For people who haven’t been using grootfs so far, what this means is from now on you just need to deploy garden-runc-release as usual and you’ll get the nice new grootfs stuff for free -- see below for why this is good. If you have a specific need to keep using the previous aufs-based garden rootfs management code (sometimes known as "garden shed") you'll be able to opt back in to this using the `experimental/use-shed.yml` ops file in cf-deployment [0]. Note that this flag is deprecated and will, at some point, go away, so please let us know if there's anything preventing your migration to the new built-in overlayfs-based groot so we can make sure it gets fixed!


## Migrating


Migration is easy:


  • if you're already using garden and groot separately via cf-deployment, just do nothing, things will continue to work (the use-grootfs.yml ops file will become a no-op in cf-deployment).

  • If you’re deploying without cf-deployment, you’ll need to stop deploying a separate grootfs job (grootfs is deployed automatically inside garden-runc-release v1.11.0).

  • If you're using garden's built-in rootfs management and you do nothing, you'll be using groot as of the next release. However (action required!), in this case we recommend either combining the next garden release - garden-runc-release v1.11.0 - with a stemcell upgrade, or performing a `--recreate` deploy, to get rid of any leftover disk space used by the existing rootfs management code's data directories. (A minimal example of the recreate deploy and the shed opt-out follows below.)
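For illustration, the recreate deploy and the shed opt-out mentioned above might look roughly like this with the BOSH v2 CLI (the deployment name, manifest path and exact ops-file location are assumptions on my part):

```sh
# Recreate VMs when picking up garden-runc-release v1.11.0 so that leftover
# data directories from the old rootfs management code are cleaned up.
bosh -e my-env -d cf deploy cf-deployment.yml --recreate

# Or, to opt back in to the deprecated aufs-based "shed" code path for now:
bosh -e my-env -d cf deploy cf-deployment.yml \
  -o operations/experimental/use-shed.yml
```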


## Why is this good?


Under the covers, grootfs uses the `overlay` filesystem instead of `aufs`, which is better supported in modern kernels. It's also a much more maintainable piece of code and uses the `containers/image` library rather than a docker dependency. It also supports running without needing to be root, which enables the rootless feature in garden. There’s a nice blog about groot here [1] and about rootless mode here [2].


[1]: https://cloudfoundry.org/blog/grootfs-container-image-management-cloud-foundry/

[2]: https://www.cloudfoundry.org/blog/route-rootless-containers/


Thanks!


Julz

Garden PM



USN-3522-2: Linux (Xenial HWE) vulnerability

Molly Crowther <mcrowther@...>
 

USN-3522-2: Linux (Xenial HWE) vulnerability

Severity

Critical

Vendor

Canonical Ubuntu

Versions Affected

  • Canonical Ubuntu 14.04

Description

USN-3522-1 fixed vulnerabilities in the Linux kernel for Ubuntu 16.04 LTS. This update provides the corresponding updates for the Linux Hardware Enablement (HWE) kernel from Ubuntu 16.04 LTS for Ubuntu 14.04 LTS.

Jann Horn discovered that microprocessors utilizing speculative execution and indirect branch prediction may allow unauthorized memory reads via side-channel attacks. This flaw is known as Meltdown. A local attacker could use this to expose sensitive information, including kernel memory. (CVE-2017-5754)

Please Note: These stemcells address the critical vulnerability in Ubuntu associated with Meltdown. This update may include degradations to performance. The Cloud Foundry Project will be performing additional performance testing and will make updates to this notice as more information is available.

Affected Cloud Foundry Products and Versions

Severity is critical unless otherwise noted.

  • Cloud Foundry BOSH stemcells are vulnerable, including:
    • 3312.x versions prior to 3312.49
    • 3363.x versions prior to 3363.45
    • 3421.x versions prior to 3421.35
    • 3445.x versions prior to 3445.21
    • 3468.x versions prior to 3468.16
    • All other stemcells not listed.

Mitigation

OSS users are strongly encouraged to follow one of the mitigations below:

  • The Cloud Foundry project recommends upgrading the following BOSH stemcells:
    • Upgrade 3312.x versions to 3312.49
    • Upgrade 3363.x versions to 3363.45
    • Upgrade 3421.x versions to 3421.35
    • Upgrade 3445.x versions to 3445.21
    • Upgrade 3468.x versions to 3468.16
    • All other stemcells should be upgraded to the latest version available on bosh.io (an example upload command follows this list).
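As an example of the stemcell upgrade path (a sketch: the stemcell line shown is AWS Xen-HVM Ubuntu Trusty, and the environment/deployment names are placeholders; substitute the stemcell for your own IaaS):

```sh
# Upload a patched stemcell from bosh.io, then redeploy so VMs are rebuilt
# on the patched kernel.
bosh -e my-env upload-stemcell \
  "https://bosh.io/d/stemcells/bosh-aws-xen-hvm-ubuntu-trusty-go_agent?v=3468.16"
bosh -e my-env -d cf deploy cf-deployment.yml
```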

References


Failed to use http-check-type "http" for jetty application that is deployed to cloud foundry with docker #cf

Sam Dai
 

I deploy a Jetty web application to Cloud Foundry with Docker. If I set the health-check type to "port" or "process", the application starts, but if I set the health-check type to "http", the app start times out. When I check the output of the command "cf logs app --recent", there is the error "Failed to make HTTP request to '/' on port 8080: connection refused" in the log. In the Dockerfile of this application, I didn't explicitly expose a listen port.

But when I deploy the application https://github.com/cloudfoundry-samples/test-app to CF as a Docker image and set the health-check type to "http", it can be deployed and started without error.

The CF API version of the Cloud Foundry installation is 2.78.0.
So this issue is weird; could you help check it and provide some comments?
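For reference, a minimal manifest sketch of how the health-check type and endpoint can be declared (my illustration; the app name, image and endpoint are placeholders). Note that with an http check the container must actually be listening on the port Cloud Foundry assigns, which ties back to the EXPOSE discussion earlier in this digest:

```yaml
# Sketch only: names and values are placeholders.
applications:
- name: jetty-app
  docker:
    image: myrepo/jetty-app
  health-check-type: http
  health-check-http-endpoint: /   # must answer 200 on the app's listen port
```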


CF CLI v6.34.0 Released Today - New push implementation, adds symlink support, deprecates some app manifest features

Koper, Dies <diesk@...>
 

The CF CLI team cut 6.34.0 today.

Deb, yum and Homebrew repos have been updated; binaries, installers and link to release notes are available at:

 

https://github.com/cloudfoundry/cli#downloads

New push implementation

In this cf CLI release, the v2-push that was exposed in cf CLI 6.33.0 has become the default push. It addresses a number of issues, adds improvements to performance and stability of the push process, and deprecates some app manifest features. This release will make push easier to maintain and enhance in the future.

 

push Fixes and Enhancements

  • push initially compares the "current" state with the "desired" state and displays this in a diff-like format to give a quick understanding of the updates it is going to make.
  • push now allows environment variables with ${...} in them in a manifest file. (#682)
  • push now preserves relative symlinks in app files. This makes it easier to work with Node.js apps using npm link, but note that it now errors when it detects a symlink pointing outside the app folder (e.g. a node_modules folder containing external symlinks). (#903)
  • push has a clarified error message when the route is not in the same space as the app. (#977)
  • The pattern format for the .cfignore exclusion file had not been well defined. To address this, push uses an external library that is compatible with git's rules for .gitignore. Folders containing only a .gitignore file are now included. (#993)
  • push creates many fewer temporary files during the package creation and upload process. This reduced push time from 21 minutes to 4 minutes in one case. (#1006)
  • push breaks up the API call to check the Cloud Controller cache for existing app files in batches to reduce the chance of timeouts. (#1123)
  • push resolves an issue when no value is specified for services in the app manifest. (#1142)
  • push resolves issues with platform-specific case sensitivity and locking of file and directory names by processing app bits in memory instead of writing them to disk. (#1147 and #1223)
  • push no longer adds new routes when updating an app and using --random-route. (#1177)
  • push resolves an issue with the generated random hostname for an app pushed with a non-ASCII app name. (#1214)
  • push has a new, smaller dictionary to generate random HTTP routes in order to avoid the use of questionable words. (Also resolves #1283)

Deprecations

  • App manifest deprecations getting a grace period:
    • For at least the next six months, when you use these features, the "old" push implementation is invoked and a deprecation message will be displayed. In this case, the fixes and enhancements of the "new" push (listed above) will not be invoked.
    • See blog post regarding app manifest changes on https://www.cloudfoundry.org/blog/coming-changes-app-manifest-simplification/ for more details. Please review your app manifests to see if they use the deprecated features.
    • push no longer supports app manifest route declarations using any of the host, hosts, domain, domains, or no-hostname attributes. You can use the routes attribute instead (see the manifest example after this list).
  • App manifest deprecations effective immediately (no grace period):
    • push no longer processes ${random-word} in the app manifest. We recently discovered this undocumented feature. If the intent was to create a random hostname, you can use random-route: true in the app manifest.
  • Other deprecations:
    • push does not accept conflicting flag combinations such as cf v2-push myapp --no-route --random-route.
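As a hedged illustration of the replacement (my own example, not taken from the release notes; the app name and domains are placeholders), an app manifest using the routes attribute instead of the deprecated host/domain attributes:

```yaml
# Example sketch: app name and routes are placeholders.
applications:
- name: my-app
  memory: 256M
  routes:
  - route: my-app.example.com
  - route: my-app.example.com/api
```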

Updated commands

  • install-plugin now displays a warning on its help page about plugins from untrusted authors.
  • install-plugin now displays the correct version of an existing plugin when installing a different version. (#946 (comment))

New & updated community plugins

See https://plugins.cloudfoundry.org/ for more information:

  • Swisscom Application Cloud v0.1.1 - The official Swisscom Application Cloud plugin gives you access to all the additional features of the App Cloud

Enjoy!

 

Regards,

Dies Koper & Jay Badenhope
Cloud Foundry Product Managers - CLI

 

 


Re: Announcing cf-deployment 1.0!

David Sabeti
 

Hi cf-dev!

Happy new year! I thought I'd remind everybody about the deprecation of cf-release: the current plan is to stop cutting new versions of cf-release at the end of this month. If you have any questions about the deprecation schedule or migration tooling, feel free to reach out to me and the Release Integration team in the #release-integration or #cf-deployment channels in the Cloud Foundry slack. We're eagerly looking for feedback.

Thanks!
David Sabeti
CF Release Integration Project Lead

On Tue, Nov 21, 2017 at 8:04 AM Adam Hevenor <ahevenor@...> wrote:
Super excited to see this now available. Congrats!


Re: Quieting a Noisy Neighbor #cf

Carlo Alberto Ferraris
 

One thing that is missing (that I mentioned in the original proposal) is the ability for operators to specify the CFS period. The default (and current hardcoded value) is 100ms. To understand why this is not ideal, consider the following scenario:

- The operator has enabled CPU maximums
- A web app instance was allocated a certain number of shares; with CPU maximums enabled, let's assume this translates to a CFS quota for the app of 20ms every 100ms (the CFS period)

Now, during normal runtime, as long as the app uses less than 20ms of CPU every 100ms to process incoming requests everything works wonderfully. Then suddenly GC kicks in: CPU usage spikes (because of the GC) and uses all the 20ms of CPU time. The result is that in addition to the GC pause, the application won't get *any* CPU time for the next 80ms, because the GC used up all the CPU time for the 100ms period. This means that incoming requests may not be serviced at all for 80ms.

This is the primary reason why I suggested making the period operator-controllable, and why I explained that operators would be in charge of the trade-off (lower CFS period: better worst-case latency but higher overhead; higher CFS period: worse worst-case latency but lower overhead).
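To make the numbers concrete, here is a rough sketch of the underlying cgroup (v1) CFS knobs; the paths are illustrative and not Garden's actual cgroup layout:

```sh
# Illustrative cgroup v1 settings: quota/period caps the container at 20%
# of one CPU in both cases, but the throttling granularity differs.

# Behaviour described above: 20ms of quota per hardcoded 100ms period.
echo 100000 > /sys/fs/cgroup/cpu/some-container/cpu.cfs_period_us   # 100ms
echo  20000 > /sys/fs/cgroup/cpu/some-container/cpu.cfs_quota_us    # 20ms

# Same 20% ceiling with a shorter, operator-chosen period: the app can be
# throttled for at most ~8ms at a time instead of ~80ms, at the cost of
# more scheduler overhead.
echo  10000 > /sys/fs/cgroup/cpu/some-container/cpu.cfs_period_us   # 10ms
echo   2000 > /sys/fs/cgroup/cpu/some-container/cpu.cfs_quota_us    # 2ms
```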

1721 - 1740 of 9374