Re: Dial tcp: i/o timeout while pushing a sample app to Cloud Foundry BOSH-Lite deployment
Giovanni Napoli
Hi Rob, first of all thank you for your answer. I looked into the /etc/resolv.conf file and the only nameserver is 127.0.1.1, which maps to my hostname. This morning I tried the same thing again, once more tethering through my cell phone as a hotspot, and it seems to be the same issue. I really don't know what could be the problem. I'll try switching to CLI 1.5 and see if that resolves the issue.
However, have you got any other suggestions?
Using swift as a blobstore in cloud foundry with keystone v3
Altaf, Muhammad
Hi All,
I am trying to configure cloud foundry to use swift on OpenStack. I have followed the instructions at https://docs.cloudfoundry.org/deploying/openstack/using_swift_blobstore.html
When using keystone v2, I am able to start my apps on the DEA, which is good. However, when using keystone v3, I am not able to start my apps. The error I am getting is:
“FAILED
Server error, status code: 400, error code: 170001, message: Staging error: failed to stage application:
Error downloading: HTTP status: 401”
I tried to debug by adding some ‘puts’ statements in the openstack/core.rb file, and it looks like tokens are being generated successfully, so there is no problem with authentication. The response to the auth request shows that the user has the “ResellerAdmin” role as well.
When I look on runner_z1/0 at /var/vcap/data/dea_next/tmp/app-package-download.tgz2016*, I find an error saying: “401 Unauthorized: Temp URL invalid xxxxx”
/var/vcap/sys/log/dea_next/dea_next.log shows some download URLs, and if I curl those URLs, I get the exact same error message. Below are the fog_connection settings in the Cloud Foundry manifest:
fog_connection: &fog_connection
  provider: 'OpenStack'
  openstack_username: 'cf-admin2'
  openstack_tenant: 'cf2'
  openstack_project_name: 'cf2'
  openstack_api_key: 'passw0rd'
  openstack_auth_url: 'http://<OPENSTACK_IP>:5000/v3/auth/tokens'
  openstack_domain_name: 'cf_domain'
  openstack_user_domain_name: 'cf_domain'
  openstack_temp_url_key: 'b3968d0207b54ece87cccc06515a89d4'
The account has a valid temp_url_key configured; please see below:
curl -v -X GET http://SWIFT_IP:SWIFT_PORT/v2/Auth_b34a51e551ec4796a461168c886c734f -H "X-Auth-Token: TOKEN"
* Hostname was NOT found in DNS cache
*   Trying SWIFT_IP...
* Connected to SWIFT_IP (SWIFT_IP) port SWIFT_PORT (#0)
> GET /v2/Auth_b34a51e551ec4796a461168c886c734f HTTP/1.1
> User-Agent: curl/7.35.0
> Host: SWIFT_IP:SWIFT_PORT
> Accept: */*
> X-Auth-Token: TOKEN
>
< HTTP/1.1 204 No Content
< Content-Length: 0
< X-Account-Object-Count: 0
< X-Timestamp: 1457918518.21777
< X-Account-Meta-Temp-Url-Key: b3968d0207b54ece87cccc06515a89d4
< X-Account-Bytes-Used: 0
< X-Account-Container-Count: 0
< Content-Type: text/plain; charset=utf-8
< Accept-Ranges: bytes
< X-Trans-Id: txfc362c27bdda4355a942a-0056e65d93
< Date: Mon, 14 Mar 2016 06:43:31 GMT
<
* Connection #0 to host SWIFT_IP left intact
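For reference, Swift validates a temp URL by recomputing an HMAC-SHA1 over the request method, the expiry timestamp, and the object path, keyed with the account's X-Account-Meta-Temp-Url-Key; "401 Temp URL invalid" means the signature Swift computes doesn't match the one in the URL (wrong key, a mismatched path prefix, or an expiry problem). A minimal sketch of the signing side (the object name is hypothetical; the key and account are taken from the output above):

```python
import hmac
import time
from hashlib import sha1

def temp_url(host, path, key, method="GET", ttl=300, now=None):
    """Sign the method, expiry, and path (joined by newlines) with
    HMAC-SHA1, as Swift's tempurl middleware does, and build the URL."""
    expires = int(now if now is not None else time.time()) + ttl
    body = f"{method}\n{expires}\n{path}"
    sig = hmac.new(key.encode(), body.encode(), sha1).hexdigest()
    return f"{host}{path}?temp_url_sig={sig}&temp_url_expires={expires}"

# Key and account path from the curl output above; the object name is made up.
url = temp_url(
    "http://SWIFT_IP:SWIFT_PORT",
    "/v2/Auth_b34a51e551ec4796a461168c886c734f/cc-packages/app-package.tgz",
    "b3968d0207b54ece87cccc06515a89d4",
)
print(url)
```

If a URL taken from dea_next.log still 401s when curled, comparing the path portion being signed against the path Swift actually receives (any proxy prefix included) is usually the fastest way to find the mismatch, since the signature must cover the path byte-for-byte.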
Also, I can see that the containers are created on swift, so obviously it is able to authenticate.
$ openstack container list
+---------------+
| Name |
+---------------+
| cc-buildpacks |
| cc-droplets |
| cc-packages |
| cc-resources |
+---------------+
I would appreciate it if someone could help me fix this issue.
Regards,
Muhammad Altaf
Software Development Engineer
Fujitsu Australia Software Technology Pty Ltd
14 Rodborough Road, Frenchs Forest NSW 2086, Australia
T +61 2 9452 9067 F +61 2 9975 2899
Muhammada(a)fast.au.fujitsu.com<mailto:Muhammada(a)fast.au.fujitsu.com>
fastware.com.au<http://fastware.com.au>
Disclaimer
The information in this e-mail is confidential and may contain content that is subject to copyright and/or is commercial-in-confidence and is intended only for the use of the above named addressee. If you are not the intended recipient, you are hereby notified that dissemination, copying or use of the information is strictly prohibited. If you have received this e-mail in error, please telephone Fujitsu Australia Software Technology Pty Ltd on + 61 2 9452 9000 or by reply e-mail to the sender and delete the document and all copies thereof.
Whereas Fujitsu Australia Software Technology Pty Ltd would not knowingly transmit a virus within an email communication, it is the receiver’s responsibility to scan all communication and any files attached for computer viruses and other defects. Fujitsu Australia Software Technology Pty Ltd does not accept liability for any loss or damage (whether direct, indirect, consequential or economic) however caused, and whether by negligence or otherwise, which may result directly or indirectly from this communication or any files attached.
If you do not wish to receive commercial and/or marketing email messages from Fujitsu Australia Software Technology Pty Ltd, please email unsubscribe(a)fast.au.fujitsu.com
Re: cf platform upgrade with 100% uptime for apps
Gwenn Etourneau
Stephen,
Haproxy is clearly a SPOF here, that's why in production most of the people
use a load balancer with active health checking.
Thanks
On Mon, Mar 14, 2016 at 3:07 PM, Kayode Odeyemi <dreyemi(a)gmail.com> wrote:
Stephen, I think this is only possible if you deployed CF onto multiple
DCs. Same configuration, multiple DNSs
On Mon, Mar 14, 2016 at 3:55 AM, Stephen Byers <smbyers(a)gmail.com> wrote:
Agree. But the haproxy fronts the router so any client that is pinned to
the haproxy that is taken down will not make it to the router until its DNS
ttl is reached and it resolves to the other haproxy ip, and even that may
not happen if this is a DNS round robin configuration.
I could be missing something?
On Sun, Mar 13, 2016, 8:44 PM Ben R <vagcom.ben(a)gmail.com> wrote:
I think one (or two) of the routers help in this situation even if one
haproxy is out of service.
Ben
On Sun, Mar 13, 2016 at 6:32 PM, Stephen Byers <smbyers(a)gmail.com> wrote:
Will that solve the problem? BOSH will only take one haproxy out of
service at a time, but those clients that resolved the DNS name to the IP of
the haproxy that is taken out of service for upgrade will be impacted,
correct?
Thanks
On Sun, Mar 13, 2016, 8:25 PM Amit Gupta <agupta(a)pivotal.io> wrote:
In your case, 2 HAProxys with DNS configured to point at both.
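The client-pinning concern in this thread can be illustrated with a toy simulation (everything here is hypothetical; real resolvers and load balancers are more involved): a client keeps using its cached IP for the DNS TTL, so requests keep hitting a dead HAProxy until the cache expires and round-robin DNS hands out the other IP.

```python
def simulate(ttl, outage_start, outage_end, ips=("ha1", "ha2")):
    """Toy model: the client re-resolves only when its cached entry expires,
    round-robin DNS alternates between ips, and ha1 is down during the outage."""
    results = []
    cached_ip, cached_at, resolutions = None, -ttl - 1, 0
    for t in range(10):
        if t - cached_at >= ttl:          # cache expired: re-resolve
            cached_ip = ips[resolutions % len(ips)]
            cached_at, resolutions = t, resolutions + 1
        down = cached_ip == "ha1" and outage_start <= t < outage_end
        results.append((t, cached_ip, "FAIL" if down else "ok"))
    return results

# With a 5-tick TTL and ha1 down from t=2, requests fail until re-resolution at t=5.
for t, ip, status in simulate(ttl=5, outage_start=2, outage_end=8):
    print(t, ip, status)
```

A load balancer with active health checks avoids the window entirely because failover happens at the balancer, not at the client's resolver cache.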
Re: cf platform upgrade with 100% uptime for apps
Paul Bakare
Stephen, I think this is only possible if you deployed CF onto multiple
DCs. Same configuration, multiple DNSs
On Mon, Mar 14, 2016 at 3:55 AM, Stephen Byers <smbyers(a)gmail.com> wrote:
Agree. But the haproxy fronts the router so any client that is pinned to
the haproxy that is taken down will not make it to the router until its DNS
ttl is reached and it resolves to the other haproxy ip and even that may
not happen if this is a DNS round robin configuration.
I could be missing something?
On Sun, Mar 13, 2016, 8:44 PM Ben R <vagcom.ben(a)gmail.com> wrote:
I think one (or two) of the routers help in this situation even if one
haproxy is out of service.
Ben
On Sun, Mar 13, 2016 at 6:32 PM, Stephen Byers <smbyers(a)gmail.com> wrote:
Will that solve the problem? BOSH will only take one haproxy out of
service at a time, but those clients that resolved the DNS name to the IP of
the haproxy that is taken out of service for upgrade will be impacted,
correct?
Thanks
On Sun, Mar 13, 2016, 8:25 PM Amit Gupta <agupta(a)pivotal.io> wrote:
In your case, 2 HAProxys with DNS configured to point at both.
Re: Can resources of a IDLE application be shared by others?
Stanley Shen <meteorping@...>
Thanks for information.
I said "pre-allocated" because after I pushed an APP with 5G memory specified, if I go to Cell VM, and I notice the available memory is 5G less than totally memory via "curl -s http://localhost:1800/state" on Cell VM.
I think overcommit factor is not very suitable in my case, but "resource reclamation and predictive analytics" are quite helpful, and it's a quite useful/flexible mechanism.
Do we have any plan on such cool ideas?
Hi Stanley,
No physical memory is actually pre-allocated, it's simply a maximum used to
determine if the container needs to be killed when it exceeds it. However,
since your VM has some fixed amount of physical memory (e.g. 7.5G), the
operator will want to be able to make some guarantees that the VM doesn't
run a bunch of apps that consume the entire physical memory even if the
apps don't individually exceed their maximum memory limit. This is
especially important in a multi-tenant scenario.
One mechanism to deal with this is an "over-commit factor". This is what
Dan Mikusa's link was about in case you didn't read it yet. If you want
absolute guarantees that the VM will only have work scheduled on it such
that applications cannot consume more memory than what's "guaranteed" to
them by whatever their max memory limits are set to, you'll want an
overcommit factor on memory of 1. An overcommit factor of 2 means that on
a 7.5G VM, you could allocate containers whose sum total of their max
memory limits was up to 15G, and you'd be fine as long as you can trust the
containers to not consume, in total, more than 7.5G of real memory.
The DEA architecture supports setting the overcommit factors, I'm not sure
whether Diego supports this (yet).
The two concepts Deepak brings up, resource reclamation and predictive
analytics, are both pretty cool ideas. But these are not currently
supported in Cloud Foundry.
Best,
Amit
On Thu, Mar 10, 2016 at 7:54 AM, Stanley Shen <meteorping(a)gmail.com> wrote:
Re: cf platform upgrade with 100% uptime for apps
Stephen Byers <smbyers@...>
Agree. But the haproxy fronts the router so any client that is pinned to
the haproxy that is taken down will not make it to the router until its DNS
ttl is reached and it resolves to the other haproxy ip and even that may
not happen if this is a DNS round robin configuration.
I could be missing something?
On Sun, Mar 13, 2016, 8:44 PM Ben R <vagcom.ben(a)gmail.com> wrote:
I think one (or two) of the routers help in this situation even if one
haproxy is out of service.
Ben
On Sun, Mar 13, 2016 at 6:32 PM, Stephen Byers <smbyers(a)gmail.com> wrote:
Will that solve the problem? BOSH will only take one haproxy out of
service at a time, but those clients that resolved the DNS name to the IP of
the haproxy that is taken out of service for upgrade will be impacted,
correct?
Thanks
On Sun, Mar 13, 2016, 8:25 PM Amit Gupta <agupta(a)pivotal.io> wrote:
In your case, 2 HAProxys with DNS configured to point at both.
Re: cf platform upgrade with 100% uptime for apps
Vik R <vagcom.ben@...>
I think one (or two) of the routers help in this situation even if one
haproxy is out of service.
Ben
On Sun, Mar 13, 2016 at 6:32 PM, Stephen Byers <smbyers(a)gmail.com> wrote:
Will that solve the problem? BOSH will only take one haproxy out of
service at a time but those clients that resolved the DNS name to the IP of
the haproxy that is taken out of service for upgrade will be impacted,
correct?
Thanks
On Sun, Mar 13, 2016, 8:25 PM Amit Gupta <agupta(a)pivotal.io> wrote:
In your case, 2 HAProxys with DNS configured to point at both.
Re: cf platform upgrade with 100% uptime for apps
Stephen Byers <smbyers@...>
Will that solve the problem? BOSH will only take one haproxy out of service
at a time but those clients that resolved the DNS name to the IP of the
haproxy that is taken out of service for upgrade will be impacted, correct?
Thanks
On Sun, Mar 13, 2016, 8:25 PM Amit Gupta <agupta(a)pivotal.io> wrote:
In your case, 2 HAProxys with DNS configured to point at both.
Re: cf platform upgrade with 100% uptime for apps
Amit Kumar Gupta
In your case, 2 HAProxys with DNS configured to point at both.
On Sunday, March 13, 2016, Ben R <vagcom.ben(a)gmail.com> wrote:
Is it possible to upgrade the cf platform with 100% uptime for apps?
Let me give you a scenario of the platform:
1 haproxy
2 instances for gorouter, health manager
3 VMs for dea instances
1 instance for cloud controller ng, postgres, uaa etc.
1 for nfs server
1 for cc worker
1 for doppler
1 for logtraffic controller
My answer is no, because when haproxy is down, you can't connect to the
apps.
What is the right strategy for 100% uptime for apps?
Ben
cf platform upgrade with 100% uptime for apps
Vik R <vagcom.ben@...>
Is it possible to upgrade the cf platform with 100% uptime for apps?
Let me give you a scenario of the platform:
1 haproxy
2 instances for gorouter, health manager
3 VMs for dea instances
1 instance for cloud controller ng, postgres, uaa etc.
1 for nfs server
1 for cc worker
1 for doppler
1 for logtraffic controller
My answer is no, because when haproxy is down, you can't connect to the
apps.
What is the right strategy for 100% uptime for apps?
Ben
Re: Update Parallelization in Cloud Foundry
Amit Kumar Gupta
If by "hard dependency" you mean something that has to be up strictly
before another thing for a deploy to possibly succeed, I'm not sure if
there are any such hard dependencies. PCFDev (formerly MicroPCF) brings up
all the components simultaneously on a single VM [1
<https://github.com/pivotal-cf/micropcf>]. Some processes will flap until
other ones are up, but they eventually do all come up.
There probably isn't a single solution to minimizing update time while
guaranteeing 100% uptime, as the answer will depend on a lot of different
things. Are you running DEA and/or Diego? External database and/or
external blobstore? Are you just talking about uptime of apps, or also of
the platform API? What about services as well?
If you find a colocation/update strategy that works for you, I think the
community would really appreciate hearing about it.
(Just for fun, there's also nanocf [2 <https://github.com/sclevine/nanocf>]
which is a Docker image with all of CF in it, and a bunch of videos where I
run nanocf in nanocf in BOSH-Lite CF [3
<https://www.youtube.com/watch?v=oMUGjaWg_Hk&list=PLdgSOpBLY_uFbzo1f1prmjW0hf4z1rWdm>
])
[1] https://github.com/pivotal-cf/micropcf
[2] https://github.com/sclevine/nanocf
[3]
https://www.youtube.com/watch?v=oMUGjaWg_Hk&list=PLdgSOpBLY_uFbzo1f1prmjW0hf4z1rWdm
Cheers,
Amit
On Thu, Mar 10, 2016 at 2:24 AM, Omar Elazhary <omazhary(a)gmail.com> wrote:
Thanks everyone. What I understood from Amit's response is that I can
parallelize certain components. What I also understood from both Amit's and
Dieu's responses is that some components have hard dependencies, while
others only have soft ones, and some components have no dependencies at
all. My question is: how can I figure out these dependencies? Are they
listed somewhere? The Cloud Foundry docs do a great job of describing each
component separately, but they do not explain which should be up before
which. That is what I need in order to work out an execution plan that
minimizes update time, all while keeping CF 100% available.
Thanks.
Regards,
Omar
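Once such a dependency list exists (whatever it turns out to contain), turning it into an update order is a standard topological sort. The graph below is purely illustrative, not an authoritative CF dependency map:

```python
from graphlib import TopologicalSorter

# Illustrative edges only: a key depends on everything in its set,
# i.e. those components should be up before it.
deps = {
    "cloud_controller": {"nats", "postgres"},
    "gorouter": {"nats"},
    "dea": {"nats", "cloud_controller"},
    "health_manager": {"nats"},
}

# static_order yields the components in an order that respects every edge.
order = list(TopologicalSorter(deps).static_order())
print(order)
```

Components with no ordering constraint between them can be updated in parallel, which is where the update-time savings come from.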
Re: Can resources of a IDLE application be shared by others?
Amit Kumar Gupta
Hi Stanley,
No physical memory is actually pre-allocated, it's simply a maximum used to
determine if the container needs to be killed when it exceeds it. However,
since your VM has some fixed amount of physical memory (e.g. 7.5G), the
operator will want to be able to make some guarantees that the VM doesn't
run a bunch of apps that consume the entire physical memory even if the
apps don't individually exceed their maximum memory limit. This is
especially important in a multi-tenant scenario.
One mechanism to deal with this is an "over-commit factor". This is what
Dan Mikusa's link was about in case you didn't read it yet. If you want
absolute guarantees that the VM will only have work scheduled on it such
that applications cannot consume more memory than what's "guaranteed" to
them by whatever their max memory limits are set to, you'll want an
overcommit factor on memory of 1. An overcommit factor of 2 means that on
a 7.5G VM, you could allocate containers whose sum total of their max
memory limits was up to 15G, and you'd be fine as long as you can trust the
containers to not consume, in total, more than 7.5G of real memory.
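Amit's arithmetic can be written out as a quick sketch (an illustration of the general idea, not of any particular scheduler's code):

```python
def allocatable_mb(physical_mb, overcommit):
    """Total container memory limits a VM may accept under an overcommit factor."""
    return physical_mb * overcommit

def fits(existing_limits_mb, new_limit_mb, physical_mb, overcommit):
    """Would a container with new_limit_mb still be schedulable on the VM?"""
    return sum(existing_limits_mb) + new_limit_mb <= allocatable_mb(physical_mb, overcommit)

# 7.5G VM, overcommit factor 2 -> up to 15G of summed limits, as described above.
print(allocatable_mb(7680, 2))            # 15360
print(fits([5120, 5120], 4096, 7680, 2))  # True: 14336 <= 15360
print(fits([5120, 5120], 4096, 7680, 1))  # False: 14336 > 7680
```

With a factor of 1 the sum of limits can never exceed physical memory, which is the "absolute guarantee" case; higher factors trade that guarantee for density.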
The DEA architecture supports setting the overcommit factors, I'm not sure
whether Diego supports this (yet).
The two concepts Deepak brings up, resource reclamation and predictive
analytics, are both pretty cool ideas. But these are not currently
supported in Cloud Foundry.
Best,
Amit
On Thu, Mar 10, 2016 at 7:54 AM, Stanley Shen <meteorping(a)gmail.com> wrote:
Yes, it's one way, but it's not flexible, and scaling the app requires restarting the
app as well.
As I said, I may have some heavy operations which will definitely need more
than 2G.
In my opinion the ideal way is that we just set a maximum value for each
process, but during the running of the process, we don't pre-allocate the
memory we specify as the maximum in the deployment.
I suggest you manually “cf scale -m 2G” after your app has booted.
Type “cf scale --help” for more info.
On 9 March 2016 at 04:09, Stanley Shen <meteorping(a)gmail.com> wrote:
Hello, all
When pushing an application to CF, we need to define its disk/memory limitation.
The memory limitation is just the possible maximum value that will be needed by the
application, but most of the time we don't need so much memory.
For example, I have one application which needs at most 5G memory at startup and for
some specific operations, but most of the time it just needs 2G.
So right now I need to specify 5G in the deployment manifest, and 5G memory is
allocated.
Take an m3.large VM for example: it has 7.5G.
Right now we can only push one application on it, but ideally we should be able to push more
applications, like 3, since only 2G is needed for each application.
Can the resources of an IDLE application be shared by other applications?
It seems right now all the resources are pre-allocated when pushing an application; they
will not be released even if I stop the application.
Re: Domain change for CF212 -> how to change the domain for a service broker correctly?
Amit Kumar Gupta
If you changed the app domain in your deployment manifest, it doesn't
delete the old shared domain (since other apps might still be using that
domain), it actually just adds a new shared app domain. If your broker is
running as an app on the platform, it's still bound to the old route using
the old domain. You need to bind a new route with a new domain to it, then
redoing your update-service-broker should work.
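A sketch of that sequence with the cf CLI, using the broker and placeholder domain names from Rafal's output (the app name and hostname are guesses from the old route; adjust to match the actual broker app):

```shell
# Hypothetical names throughout; CF="echo cf" makes this a dry run so the
# command sequence can be inspected without a Cloud Foundry installation.
# Drop the CF= line and use plain `cf` to run it for real.
CF="echo cf"
$CF map-route broker01-broker new_domain --hostname broker01-broker
$CF update-service-broker broker01 user pass http://broker01-broker.new_domain
# Optionally remove the stale route once the update succeeds:
$CF unmap-route broker01-broker old_domain --hostname broker01-broker
```

The key point is that `update-service-broker` only changes the URL Cloud Controller calls; the broker app must already be reachable on that URL, which is why the route has to be mapped first.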
On Fri, Mar 11, 2016 at 7:52 AM, Rafal Radecki <radecki.rafal(a)gmail.com>
wrote:
Hi.
I am in the process of changing the domain name for a service broker. I
managed to redeploy CF with an updated deployment manifest, and all VMs which
form the deployment are in the running state. I can log in to the new endpoint
and list apps, service brokers, etc. I am not able to update the service
brokers though:
$ cf service-brokers | grep broker01
broker01 http://broker01-broker.old_domain
$ cf update-service-broker broker01 user pass
http://broker01-broker.new_domain
Updating service broker broker01 as admin...
FAILED
Server error, status code: 502, error code: 10001, message: The service
broker rejected the request to
http://broker01-broker.new_domain/v2/catalog. Status Code: 404 Not Found,
Body: 404 Not Found: Requested route ('broker01-broker.new_domain') does
not exist.
What is the correct way to update the domain for them?
BR,
Rafal.
Re: User defined variable "key" validation doesn't happen at cf set-env phase
Padmashree B
Hi Nick,
Thanks for the clarification!
But as a developer I would expect the restart/restage of the application to fail if an environment variable is invalid.
However, this is not always the case: if the variable name has special characters such as @ or $, the restart fails and the user can then troubleshoot to find the issue.
But in cases where the variable name contains . or -, the application restarts/restages successfully. The app logs, however, contain an ERR message:
ERR /bin/bash: line 17: export: `test-dash=testing special chars': not a valid identifier
At runtime, these invalid variables are not accessible to the application.
As a developer, I would expect the application to fail at an early stage during restart.
Kind Regards,
Padma
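The names bash rejects are exactly those that aren't valid shell identifiers (a letter or underscore followed by letters, digits, or underscores). A minimal sketch of the kind of early check being asked for here (a hypothetical helper, not CF code):

```python
import re

# POSIX shell identifier: letters, digits, underscore; must not start with a digit.
IDENTIFIER = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def invalid_env_names(env):
    """Return the variable names that bash's `export` would reject."""
    return [name for name in env if not IDENTIFIER.match(name)]

env = {
    "HOME_URL": "ok",
    "test-dash": "testing special chars",   # the case from the ERR log above
    "test.dot": "also rejected by export",
    "2START": "digits cannot lead",
}
print(invalid_env_names(env))  # ['test-dash', 'test.dot', '2START']
```

Running such a check at `cf set-env` or staging time would surface the problem before the app boots, rather than leaving the variable silently inaccessible at runtime.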
CF 231 Diego?
Austin Chen
Hi everyone,
I'm new to learning the CF architecture, and I noticed that when I downloaded CF 231 there didn't seem to be any Diego architecture as described in the documentation online; rather, it was the pre-Diego architecture. I was wondering whether there is a version of open source CF released with Diego, and whether CF 231+ versions will be switching to the Diego architecture.
The documentation does not exactly match how CF 231 works, and seems more up-to-date than the actual version itself. Could anyone clarify to a beginner like me what is going on?
Thanks
Proposal for new OAuth grant type in UAA
Vineet Banga <vineetbanga1@...>
This is a proposal to add a new OAuth grant type in UAA to support a stronger authentication model. The proposal lists the two potential implementations for the same grant type. We have chosen to go with Option #1 in our implementation, but I wanted to share the proposal and get feedback on it. You can view and comment on the proposal here:
https://docs.google.com/document/d/1_wZMVk-Ir9WAjin606CGaWehAjj3MX1ytS00bXlK2gs/edit?usp=sharing
Thanks
Vineet Banga
Re: Announcing the Cloud Foundry Java Client 2.0.0.M1
Mike Youngstrom <youngm@...>
Nice work! This looks like an excellent client library. I'm glad it
supports the v2 and v3 APIs.
Any thoughts or plans for producing UAA and loggregator/firehose clients as
well? Perhaps as separate modules? I see limited UAA auth and limited
loggregator support in cloudfoundry-client.
I wonder if we could get more componentization in the client library by
renaming "cloudfoundry-client" to "cloud-controller-client" and adding a
"uaa-client" (making it fully featured eventually) and a "loggregator-client",
both probably included in "cloudfoundry-operations".
Thoughts?
Mike
Announcing the Cloud Foundry Java Client 2.0.0.M1
Ben Hale <bhale@...>
As some of you may know, the Cloud Foundry Java Client has gone through various levels of neglect over the past couple of years. Towards the end of last year, my team started working on the project with the goal of making it a piece of software that we were not only proud of, but that we could build towards the future with. With that in mind, I’m exceedingly pleased to announce our 2.0.0.M1 release.
We’ve taken the opportunity of this major release to reset what the project is:
* What was once a hodgepodge of Java APIs, mapping both directly onto the REST APIs and onto higher-level abstractions, is now two clearly delineated APIs. We expose a `-client` API mapping to the REST calls and an `-operations` API mapping to the higher-level abstractions that roughly match the CLI.
* What once was an implementation of a subset of the Cloud Foundry APIs is now a target of implementing every single REST call exposed by any Cloud Foundry component (nearly 500 individual URIs across 4 components)
* What was once a co-mingled interface and Spring-based implementation is now an airtight separation between the two allowing alternate implementations (addressing one of the largest complaints about the previous generation)
* Finally, we’ve chosen to make the API reactive, building on top of Project Reactor, but interoperable with any Reactive Streams compatible library
Obviously, the biggest change in this list is the move to a reactive API. This decision was not taken lightly. In fact, our original V2 implementation was imperative, following the pattern of the V1 effort. However, after consulting with both internal and external users, we found that many teams were viewing “blocking” APIs as a serious issue as they implemented their high-performance micro-service architectures.
As an example, we worked very deeply with a team right at the beginning as they were creating a new Cloud Foundry Routing Service. Since each HTTP request into their system went through this service, performance was a primary concern, and they were finding that the blocking bit of their implementation (Java Client V1) was the biggest hit for them. We’ve mitigated a lot of the performance bottleneck with what we’ve got today, but for M2 we’re planning on removing that last blocking component completely and moving to a fully non-blocking network stack. This isn’t an isolated use case either; we’ve been seeing a lot of this theme: micro-service architectures require throughput that can’t be achieved without either a non-blocking stack or “massive” horizontal scaling. Most companies would prefer the former simply due to cost.
As a general rule you can make a reactive API blocking (just tack `.get()` onto the end of any Reactor flow) but cannot make a blocking API non-blocking (see the insanity we do to fake it, with non-optimal results, on RestTemplate[1] today). So since we had a strong requirement to support this non-blocking design, we figured that going reactive-first was the most flexible design we could choose.
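Ben's rule of thumb can be illustrated with plain JDK types (a hedged sketch using `CompletableFuture` rather than cf-java-client or Reactor code): joining at the end of an async result makes it blocking, while "non-blocking" built around a blocking call merely parks a pool thread instead of the caller's thread.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BlockingVsReactive {

    // An async API: the result arrives later; the caller's thread is not parked.
    static CompletableFuture<String> fetchAppNameAsync() {
        return CompletableFuture.supplyAsync(() -> "my-app");
    }

    // A blocking API: the calling thread is parked until the result is ready.
    static String fetchAppNameBlocking() throws InterruptedException {
        Thread.sleep(100); // simulate a synchronous HTTP call
        return "my-app";
    }

    public static void main(String[] args) throws Exception {
        // Easy direction: make an async/reactive API blocking by joining at the
        // end (the `.get()` on a Reactor flow is the same idea).
        System.out.println(fetchAppNameAsync().join());

        // Hard direction: wrapping a blocking call to look non-blocking only
        // shifts the parked thread into a pool; a thread is still consumed
        // for the entire duration of the wait.
        ExecutorService pool = Executors.newSingleThreadExecutor();
        CompletableFuture<String> fake = CompletableFuture.supplyAsync(() -> {
            try {
                return fetchAppNameBlocking();
            } catch (InterruptedException e) {
                throw new IllegalStateException(e);
            }
        }, pool);
        System.out.println(fake.join());
        pool.shutdown();
    }
}
```

Both prints produce the same value, but only the first path leaves the waiting thread free; at scale, the second path is the "massive horizontal scaling" cost Ben describes.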
If you want to get started with this new version, I’m sad to say that we’re a bit lacking in the “onboarding experience” at the moment. We don’t have examples or a user guide, but the repository’s README[2] is a good place to start. As you progress deeper into using the client, you can probably piece something together from the Javadocs[3] and the Cloud Foundry API[4] documentation. Finally, the best examples are found in our integration tests[5]. Improving this experience is something we’re quite sensitive to, so you can expect significant improvements here.
The reason that we’re laying this foundation is you. We’re already seeing customers adopting (and contributing back to!) the project, but we’ve really done it to accelerate the entire Cloud Foundry ecosystem. If you need to interact with Cloud Foundry, I want you to be using the Java Client. If you find that it’s not the best way for you to get your work done, I want you to tell me, loudly and often. We’re also excited about being in the vanguard of reactive APIs within the Java ecosystem. Having recently experienced it, I’m sure that this transition will not be trivial, but I am sure that it’ll be worthwhile.
A special thanks goes out to Scott Fredrick (Pivotal) for nursing the project along far enough for us to take over, Benjamin Einaudi (Orange Telecom) for his constant submissions, and of course Chris Frost (Pivotal), Glyn Normington (Pivotal), Paul Harris (Pivotal), and Steve Powell (Pivotal) for doing so much of the hard work.
-Ben Hale
Cloud Foundry Java Experience
[1]: https://github.com/cloudfoundry/cf-java-client/blob/c35c20463fab0e7730bf807af9e84ac186cdb3c2/cloudfoundry-client-spring/src/main/lombok/org/cloudfoundry/spring/util/AbstractSpringOperations.java#L73-L127
[2]: https://github.com/cloudfoundry/cf-java-client
[3]: https://github.com/cloudfoundry/cf-java-client#documentation
[4]: https://apidocs.cloudfoundry.org/latest-release/
[5]: https://github.com/cloudfoundry/cf-java-client/blob/master/integration-test/src/test/java/org/cloudfoundry/operations/RoutesTest.java#L114-L135
Re: DEA Chargeback w/ overcommit
Mike Youngstrom <youngm@...>
We heavily overcommit our DEAs (around 4x) and charge the customer for the
memory they've requested. But we also ensure our DEAs keep, in total, some
percentage of free memory just in case. So we charge our customers
something close to that amount above the raw RAM cost, so they share the
cost of the overhead. This cost shrinks as the deployment gets more
utilized.
Mike
DEA Chargeback w/ overcommit
John Wong
Hi
Given a DEA with 15GB and an overcommit factor of 2, total "memory" is 30GB.
Ideally we can push up to 30 app instances per host, if each app instance
requires a 1GB memory allocation.
Suppose the environment has 3 DEAs (max = 90GB) and we need to place a
total of 40GB of app instances:
1. Should I kill the 3rd DEA, given I still have "20GB" left, and provision
it again when I am about to run low?
2. Do you consider the overcommit factor in your chargeback? I.e., even though
you can get up to 30GB, you charge the customer for the physical RAM (15GB).
In this case, do you still charge the customer
n * box_price * (percentage of mem consumption / total physical memory)
= 3 * box_price * (40/45) ?
3. Would I actually see an "unavailable stager" error even with overcommit,
for a 40/90 deployment?
Thanks.... I hope these questions make sense.
John
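The arithmetic in question 2 can be checked with a quick sketch (the numbers come from John's example; `boxPrice` is a hypothetical per-DEA price, not anything from the thread):

```java
public class Chargeback {

    // Chargeback for the whole environment: every DEA's box price, scaled by
    // the share of total *physical* memory actually consumed.
    static double charge(int deas, double boxPrice,
                         double usedGb, double physicalGbPerDea) {
        double totalPhysicalGb = deas * physicalGbPerDea;
        return deas * boxPrice * (usedGb / totalPhysicalGb);
    }

    public static void main(String[] args) {
        // John's example: 3 DEAs, 15GB physical each (45GB physical total),
        // overcommit factor 2 (90GB allocatable), 40GB of instances placed.
        double boxPrice = 100.0; // hypothetical monthly price per DEA
        double bill = charge(3, boxPrice, 40.0, 15.0);
        // 3 * 100 * (40/45) ~= 266.67. Note the overcommit factor never
        // appears in the formula: customers are billed against physical RAM,
        // which is the approach Mike describes in his reply.
        System.out.printf("%.2f%n", bill);
    }
}
```

With overcommit charged instead (40/90 of the allocatable total), the bill would drop to half, which is why the choice of denominator is the whole chargeback question.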