Re: Update Parallelization in Cloud Foundry
On Thu, Mar 10, 2016 at 2:24 AM, Omar Elazhary <omazhary(a)gmail.com> wrote:
Thanks everyone. What I understood from Amit's response is that I can parallelize certain components. What I also understood from both Amit's and Dieu's responses is that some components have hard dependencies, while others only have soft ones, and some components have no dependencies at all. My question is: how can I figure out these dependencies? Are they listed somewhere? The Cloud Foundry docs do a great job of describing each component separately, but they do not explain which should be up before which. That is what I need in order to work out an execution plan that minimizes update time while keeping CF 100% available.
Thanks.
Regards, Omar
Re: Can resources of a IDLE application be shared by others?
Hi Stanley,
No physical memory is actually pre-allocated; the limit is simply a maximum used to decide whether the container should be killed once it exceeds that limit. However, since your VM has some fixed amount of physical memory (e.g. 7.5G), the operator will want to be able to make some guarantees that the VM doesn't run a bunch of apps that consume the entire physical memory even if the apps don't individually exceed their maximum memory limit. This is especially important in a multi-tenant scenario.
One mechanism to deal with this is an "over-commit factor". This is what Dan Mikusa's link was about in case you didn't read it yet. If you want absolute guarantees that the VM will only have work scheduled on it such that applications cannot consume more memory than what's "guaranteed" to them by whatever their max memory limits are set to, you'll want an overcommit factor on memory of 1. An overcommit factor of 2 means that on a 7.5G VM, you could allocate containers whose sum total of their max memory limits was up to 15G, and you'd be fine as long as you can trust the containers to not consume, in total, more than 7.5G of real memory.
The DEA architecture supports setting the overcommit factors, I'm not sure whether Diego supports this (yet).
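If memory serves, the DEA-side knob lives in the dea_next job of the deployment manifest; a minimal sketch (the property name is from memory and worth double-checking against your cf-release version):

properties:
  dea_next:
    memory_overcommit_factor: 2   # allows scheduling up to 2x physical memory, as in the 7.5G -> 15G example above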
The two concepts Deepak brings up, resource reclamation and predictive analytics, are both pretty cool ideas. But these are not currently supported in Cloud Foundry.
Best, Amit
On Thu, Mar 10, 2016 at 7:54 AM, Stanley Shen <meteorping(a)gmail.com> wrote:
Yes, that's one way, but it's not flexible, and scaling also restarts the app. As I said, I may have some heavy operations which will definitely need more than 2G.
In my opinion, the ideal behavior would be to set a maximum value for each process but not pre-allocate that maximum while the process is running.
I suggest you manually “cf scale -m 2G“ after your app has booted. Type “cf scale --help” for more info.
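Spelled out, the suggestion above would look something like this (the app name is illustrative):

cf push my-app -m 5G     # start with the limit the heavy startup phase needs
cf scale my-app -m 2G    # then lower the limit; note this restarts the app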
On 9 March 2016 at 04:09, Stanley Shen <meteorping(a)gmail.com> wrote:
Hello, all
When pushing an application to CF, we need to define its disk/memory limits.
The memory limit is just the maximum the application could possibly need; most of the time we don't need that much memory.
For example, I have one application which needs at most 5G of memory at startup and for some specific operations, but most of the time it just needs 2G.
So right now I need to specify 5G in the deployment manifest, and 5G of memory is allocated.
Take an m3.large VM for example: it has 7.5G. Right now we can only push one application to it, but ideally we should be able to push more applications, like 3, since each application only needs 2G.
Can the resources of an IDLE application be shared by other applications?
It seems that right now all the resources are pre-allocated when pushing the application, and they are not released even when I stop the application.
Re: Domain change for CF212 -> how to change the domain for a service broker correctly?
If you changed the app domain in your deployment manifest, it doesn't delete the old shared domain (since other apps might still be using that domain); it actually just adds a new shared app domain. If your broker is running as an app on the platform, it's still bound to the old route using the old domain. You need to bind a new route with the new domain to it, and then redoing your update-service-broker should work.
On Fri, Mar 11, 2016 at 7:52 AM, Rafal Radecki <radecki.rafal(a)gmail.com> wrote:
Hi.
I am in the process of changing the domain name for a service broker. I managed to redeploy CF with an updated deployment manifest, and all VMs which form the deployment are in running state. I can log in to the new endpoint and list apps, service brokers, etc. I am not able to update the service brokers though:
$ cf service-brokers | grep broker01
broker01   http://broker01-broker.old_domain
$ cf update-service-broker broker01 user pass http://broker01-broker.new_domain
Updating service broker broker01 as admin...
FAILED
Server error, status code: 502, error code: 10001, message: The service broker rejected the request to http://broker01-broker.new_domain/v2/catalog. Status Code: 404 Not Found, Body: 404 Not Found: Requested route ('broker01-broker.new_domain') does not exist.
What is the correct way to update the domain for them?
BR, Rafal.
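A sketch of the fix Amit describes above, assuming the broker runs as an app named broker01-broker and the new shared domain already exists:

cf map-route broker01-broker new_domain --hostname broker01-broker
cf update-service-broker broker01 user pass http://broker01-broker.new_domain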
Re: User defined variable "key" validation doesn't happen at cf set-env phase
Hi Nick,
Thanks for the clarification! But as a developer I would expect the restart/restage of the application to fail if an environment variable is invalid. However, this is not always the case: if the var name contains special characters such as @ or $, the restart fails and the user can then troubleshoot to find the issue. But when the var name contains . or -, the application restarts/restages successfully. The app logs, however, contain the error message:
ERR /bin/bash: line 17: export: `test-dash=testing special chars': not a valid identifier
At runtime, these invalid variables are not accessible by the application.
As a developer, I would expect the application to fail at an early stage during restart.
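A rough pre-flight check along these lines (a sketch using bash's identifier rule; app and variable names are illustrative):

name="test-dash"
if [[ "$name" =~ ^[A-Za-z_][A-Za-z0-9_]*$ ]]; then
  cf set-env my-app "$name" "testing special chars"
else
  echo "'$name' is not a valid shell identifier; the export would fail at staging" >&2
fi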
Kind Regards, Padma
Hi everyone,
I'm new to learning and understanding the CF architecture, and I realized when I downloaded CF 231 that there didn't seem to be any Diego architecture as described in the documentation online. Rather, it was the pre-Diego architecture. I was wondering whether any open-source CF version has been released with Diego, and whether CF 231+ versions will be switching to the Diego architecture.
The documentation does not exactly match how CF 231 works; it seems more up-to-date than the version itself. Could anyone clarify to a beginner like me what is going on?
Thanks
Proposal for new OAuth grant type in UAA
Vineet Banga <vineetbanga1@...>
Re: Announcing the Cloud Foundry Java Client 2.0.0.M1
Mike Youngstrom <youngm@...>
Nice work! This looks like an excellent client library. I'm glad it supports v2 and v3 apis.
Any thoughts or plans for producing uaa and loggregator/firehose clients as well? Perhaps as separate modules? I see limited uaa auth and limited loggregator support in cloudfoundry-client.
I wonder if we could get more componentization in the client library by renaming "cloudfoundry-client" to "cloud-controller-client" and adding a "uaa-client" (eventually fully featured) and a "loggregator-client", both probably included in "cloudfoundry-operations".
Thoughts?
Mike
On Fri, Mar 11, 2016 at 2:36 PM, Ben Hale <bhale(a)pivotal.io> wrote:
As some of you may know, the Cloud Foundry Java Client has gone through various levels of neglect over the past couple of years. Towards the end of last year, my team started working on the project with the goal of making it a piece of software that we were not only proud of, but that we could build towards the future with. With that in mind, I’m exceedingly pleased to announce our 2.0.0.M1 release.
We’ve taken the opportunity of this major release to reset what the project is:
* What was once a hodgepodge of Java APIs, some mapping directly onto the REST APIs and some onto higher-level abstractions, is now two clearly delineated APIs. We expose a `-client` API mapping to the REST calls and an `-operations` API mapping to the higher-level abstractions that roughly match the CLI.
* What was once an implementation of a subset of the Cloud Foundry APIs now targets implementing every single REST call exposed by any Cloud Foundry component (nearly 500 individual URIs across 4 components).
* What was once a co-mingled interface and Spring-based implementation is now an airtight separation between the two, allowing alternate implementations (addressing one of the largest complaints about the previous generation).
* Finally, we’ve chosen to make the API reactive, building on top of Project Reactor, but interoperable with any Reactive Streams compatible library.
Obviously, the biggest change in this list is the move to a reactive API. This decision was not taken lightly. In fact, our original V2 implementation was imperative, following the pattern of the V1 effort. However, after consulting with both internal and external users, we found that many teams were viewing “blocking” APIs as a serious issue as they implemented their high-performance micro-service architectures.
As an example, we worked very deeply with a team right at the beginning as they were creating a new Cloud Foundry Routing Service. Since each HTTP request into their system went through this service, performance was a primary concern, and they were finding that the blocking bit of their implementation (Java Client V1) was the biggest hit for them. We’ve mitigated a lot of the performance bottleneck with what we’ve got today, but for M2 we’re planning on removing that last blocking component completely and moving to a fully non-blocking network stack. This isn’t an isolated use case either; we’ve been seeing a lot of this theme: micro-service architectures require throughput that can’t be achieved without either a non-blocking stack or “massive” horizontal scaling. Most companies would prefer the former simply due to cost.
As a general rule you can make a reactive API blocking (just tack `.get()` onto the end of any Reactor flow) but cannot make a blocking API non-blocking (see the insanity we do to fake it, with non-optimal results, on RestTemplate[1] today). So since we had a strong requirement to support this non-blocking design, we figured that going reactive-first was the most flexible design we could choose.
If you want to get started with this new version, I’m sad to say that we’re a bit lacking in the “on boarding experience” at the moment. We don’t have examples or a user-guide, but the repository’s README[2] is a good place to start. As you progress deeper into using the client, you can probably piece something together from the Javadocs[3] and the Cloud Foundry API[4] documentation. Finally, the best examples are found in our integration tests[5]. Improving this experience is something we’re quite sensitive to, so you can expect significant improvements here.
The reason that we’re laying this foundation is you. We’re already seeing customers adopting (and contributing back to!) the project, but we’ve really done it to accelerate the entire Cloud Foundry ecosystem. If you need to interact with Cloud Foundry, I want you to be using the Java Client. If you find that it’s not the best way for you to get your work done, I want you to tell me, loudly and often. We’re also excited about being in the vanguard of reactive APIs within the Java ecosystem. Having recently experienced it, I’m sure that this transition will not be trivial, but I am sure that it’ll be worthwhile.
A special thanks goes out to Scott Fredrick (Pivotal) for nursing the project along far enough for us to take over, Benjamin Einaudi (Orange Telecom) for his constant submissions, and of course Chris Frost (Pivotal), Glyn Normington (Pivotal), Paul Harris (Pivotal), and Steve Powell (Pivotal) for doing so much of the hard work.
-Ben Hale Cloud Foundry Java Experience
[1]: https://github.com/cloudfoundry/cf-java-client/blob/c35c20463fab0e7730bf807af9e84ac186cdb3c2/cloudfoundry-client-spring/src/main/lombok/org/cloudfoundry/spring/util/AbstractSpringOperations.java#L73-L127
[2]: https://github.com/cloudfoundry/cf-java-client
[3]: https://github.com/cloudfoundry/cf-java-client#documentation
[4]: https://apidocs.cloudfoundry.org/latest-release/
[5]: https://github.com/cloudfoundry/cf-java-client/blob/master/integration-test/src/test/java/org/cloudfoundry/operations/RoutesTest.java#L114-L135
Announcing the Cloud Foundry Java Client 2.0.0.M1
As some of you may know, the Cloud Foundry Java Client has gone through various levels of neglect over the past couple of years. Towards the end of last year, my team started working on the project with the goal of making it a piece of software that we were not only proud of, but that we could build towards the future with. With that in mind, I’m exceedingly pleased to announce our 2.0.0.M1 release.
We’ve taken the opportunity of this major release to reset what the project is:
* What was once a hodgepodge of Java APIs, some mapping directly onto the REST APIs and some onto higher-level abstractions, is now two clearly delineated APIs. We expose a `-client` API mapping to the REST calls and an `-operations` API mapping to the higher-level abstractions that roughly match the CLI.
* What was once an implementation of a subset of the Cloud Foundry APIs now targets implementing every single REST call exposed by any Cloud Foundry component (nearly 500 individual URIs across 4 components).
* What was once a co-mingled interface and Spring-based implementation is now an airtight separation between the two, allowing alternate implementations (addressing one of the largest complaints about the previous generation).
* Finally, we’ve chosen to make the API reactive, building on top of Project Reactor, but interoperable with any Reactive Streams compatible library.
Obviously, the biggest change in this list is the move to a reactive API. This decision was not taken lightly. In fact, our original V2 implementation was imperative, following the pattern of the V1 effort. However, after consulting with both internal and external users, we found that many teams were viewing “blocking” APIs as a serious issue as they implemented their high-performance micro-service architectures.
As an example, we worked very deeply with a team right at the beginning as they were creating a new Cloud Foundry Routing Service. Since each HTTP request into their system went through this service, performance was a primary concern, and they were finding that the blocking bit of their implementation (Java Client V1) was the biggest hit for them. We’ve mitigated a lot of the performance bottleneck with what we’ve got today, but for M2 we’re planning on removing that last blocking component completely and moving to a fully non-blocking network stack. This isn’t an isolated use case either; we’ve been seeing a lot of this theme: micro-service architectures require throughput that can’t be achieved without either a non-blocking stack or “massive” horizontal scaling. Most companies would prefer the former simply due to cost.
As a general rule you can make a reactive API blocking (just tack `.get()` onto the end of any Reactor flow) but cannot make a blocking API non-blocking (see the insanity we do to fake it, with non-optimal results, on RestTemplate[1] today). So since we had a strong requirement to support this non-blocking design, we figured that going reactive-first was the most flexible design we could choose.
If you want to get started with this new version, I’m sad to say that we’re a bit lacking in the “on boarding experience” at the moment. We don’t have examples or a user-guide, but the repository’s README[2] is a good place to start. As you progress deeper into using the client, you can probably piece something together from the Javadocs[3] and the Cloud Foundry API[4] documentation. Finally, the best examples are found in our integration tests[5]. Improving this experience is something we’re quite sensitive to, so you can expect significant improvements here.
The reason that we’re laying this foundation is you. We’re already seeing customers adopting (and contributing back to!) the project, but we’ve really done it to accelerate the entire Cloud Foundry ecosystem. If you need to interact with Cloud Foundry, I want you to be using the Java Client. If you find that it’s not the best way for you to get your work done, I want you to tell me, loudly and often. We’re also excited about being in the vanguard of reactive APIs within the Java ecosystem. Having recently experienced it, I’m sure that this transition will not be trivial, but I am sure that it’ll be worthwhile.
A special thanks goes out to Scott Fredrick (Pivotal) for nursing the project along far enough for us to take over, Benjamin Einaudi (Orange Telecom) for his constant submissions, and of course Chris Frost (Pivotal), Glyn Normington (Pivotal), Paul Harris (Pivotal), and Steve Powell (Pivotal) for doing so much of the hard work.
-Ben Hale
Cloud Foundry Java Experience
[1]: https://github.com/cloudfoundry/cf-java-client/blob/c35c20463fab0e7730bf807af9e84ac186cdb3c2/cloudfoundry-client-spring/src/main/lombok/org/cloudfoundry/spring/util/AbstractSpringOperations.java#L73-L127
[2]: https://github.com/cloudfoundry/cf-java-client
[3]: https://github.com/cloudfoundry/cf-java-client#documentation
[4]: https://apidocs.cloudfoundry.org/latest-release/
[5]: https://github.com/cloudfoundry/cf-java-client/blob/master/integration-test/src/test/java/org/cloudfoundry/operations/RoutesTest.java#L114-L135
Re: DEA Chargeback w/ overcommit
Mike Youngstrom <youngm@...>
We heavily overcommit our DEAs (around 4x) and we charge the customer for the memory they've requested. But we also ensure our DEAs keep, in total, some percentage of free memory just in case. So we charge our customers something close to that amount more than raw RAM costs, so they share the cost of the overhead. This cost shrinks as the deployment gets more utilized.
Mike
On Fri, Mar 11, 2016 at 12:26 PM, John Wong <gokoproject(a)gmail.com> wrote:
Hi
Given a DEA with 15GB, overcommit factor = 2, total "memory" is 30GB. Ideally we can push up to 30 app instances per host, if each app instance requires 1GB mem allocation.
Suppose the environment has 3 DEAs (max = 90GB) and we need to place a total of 40GB of app instances:
1. Should I kill the 3rd DEA given I still have "20GB" left, and provision the 3rd one when I am about to run low?
2. Do you consider the overcommit factor in your chargeback? I.e., even though you can get up to 30GB, you charge the customer for the physical RAM (15GB). In this case, you still charge the customer
n * box_price * (total allocated memory / total physical memory) = 3 * box_price * (40/45)?
3. would I actually see "unavailable stager" error even with overcommit, for a 40/90 deployment?
Thanks.... I hope these questions make sense.
John
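For what it's worth, the formula in John's item 2 evaluates as follows (box_price is a placeholder):

awk -v n=3 -v box_price=100 -v alloc=40 -v phys=45 \
  'BEGIN { printf "charge = %.2f\n", n * box_price * (alloc / phys) }'
# charge = 266.67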
DEA Chargeback w/ overcommit
Hi
Given a DEA with 15GB, overcommit factor = 2, total "memory" is 30GB. Ideally we can push up to 30 app instances per host, if each app instance requires 1GB mem allocation.
Suppose the environment has 3 DEAs (max = 90GB) and we need to place a total of 40GB of app instances:
1. Should I kill the 3rd DEA given I still have "20GB" left, and provision the 3rd one when I am about to run low?
2. Do you consider the overcommit factor in your chargeback? I.e., even though you can get up to 30GB, you charge the customer for the physical RAM (15GB). In this case, you still charge the customer
n * box_price * (total allocated memory / total physical memory) = 3 * box_price * (40/45)?
3. would I actually see "unavailable stager" error even with overcommit, for a 40/90 deployment?
Thanks.... I hope these questions make sense.
John
Domain change for CF212 -> how to change the domain for a service broker correctly?
Hi.
I am in the process of changing the domain name for a service broker. I managed to redeploy CF with an updated deployment manifest, and all VMs which form the deployment are in running state. I can log in to the new endpoint and list apps, service brokers, etc. I am not able to update the service brokers though:
$ cf service-brokers | grep broker01
broker01   http://broker01-broker.old_domain
$ cf update-service-broker broker01 user pass http://broker01-broker.new_domain
Updating service broker broker01 as admin...
FAILED
Server error, status code: 502, error code: 10001, message: The service broker rejected the request to http://broker01-broker.new_domain/v2/catalog. Status Code: 404 Not Found, Body: 404 Not Found: Requested route ('broker01-broker.new_domain') does not exist.
What is the correct way to update the domain for them?
BR, Rafal.
Re: Org and Space Quota management
Hi Padma,
Sorry for the delayed response. You've accurately characterized the current behavior. One additional item to note is that the error message accurately reflects which quota is not allowing new resources to be created.
I'll take your concerns to the CAPI engineering team and determine if there are any improvements we could make in regards to quota. I would also like to hear from any other community members on whether or not the current behavior is preferable before prioritizing any changes.
Thanks,
Nick
Nicholas Calugar CAPI Product Manager
On Sat, Mar 5, 2016 at 8:51 AM, B, Padmashree <padmashree.b(a)sap.com> wrote:
Hi,
Any inputs on the current behavior of the APIs would be of great help, thanks !
Regards,
Padma
Re: Adding previous_instances and previous_memory fields to cf_event
Hristo Iliev
Hi,
Yep - we call purge only once. And that's why we need to disable it afterwards. It feels a bit odd to know there is an endpoint that is there to be called once. I wonder if we can add some auto-destruct mechanism :)
We realize that the CC DB stores the events only temporarily. That's exactly the reason we want to have additional fields in the events, so we can easily tell what changed without complex logic or support/operation load from additional DB infrastructure.
Of course if we are missing something and this can be done easier we'll be glad to hear it :)
Regards, Hristo Iliev
2016-03-11 9:04 GMT+02:00 Dieu Cao <dcao(a)pivotal.io>:
Hi Hristo,
Correct me if I'm wrong, but it sounds like you are calling purge multiple times. Am I misunderstanding the workflow you are describing?
Purge should only be called one time EVER on any deployment. It should not be called on each billing cycle. Cloud Controller will purge older billing events itself based on the configured cc.app_usage_events.cutoff_age_in_days and cc.service_usage_events.cutoff_age_in_days, which both default to 31 days.
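For reference, the one-time purge being discussed is an admin-only Cloud Controller call; to the best of my knowledge it looks like the following (verify the exact path against your CC version before running it):

cf curl -X POST /v2/app_usage_events/destructively_purge_all_and_reseed_started_apps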
-Dieu
On Wed, Mar 9, 2016 at 11:57 PM, Hristo Iliev <hsiliev(a)gmail.com> wrote:
Hi Dieu,
We are polling app-usage-events with Abacus, but because of purge the events may be out of order right after billing epoch started. But that's only part of the problem.
To consume app-usage-events, every integrator needs to build additional infrastructure like:
- a simple filter, loadbalancer, or API management product to disable purging once the billing epoch has started
- DB replication software that pulls data and deals with wrongly ordered events after purge (we use abacus-cf-bridge)
- the data warehouse described in the doc you sent
Introducing the previous values in the usage events will help us get rid of most of the infrastructure we need in order to be able to deal with usage events, before they even reach a billing system. We won't need to care about purge calls or an additional DB, but instead simply pull events. The previous values help us to:
- use formulas that do not care about the order of events (solves the purge problem)
- get the info about a billing-relevant change (we don't have to cache, access a DB, or scan a stream to know what changed)
- simplify the processing logic in Abacus (or another metering/aggregation solution)
We now pull the usage events, but we would like to be notified, to offload the CC from the constant /v2/app_usage_events calls. This however will not solve any of the problems we now have, and in fact may mess up the ordering of the events.
Regards, Hristo Iliev
2016-03-10 6:32 GMT+02:00 Dieu Cao <dcao(a)pivotal.io>:
We don't advise using /v2/events for metering/billing for precisely the reason you mention, that order of events is not guaranteed.
You can find more information about app usage events and service usage events which are guaranteed to be in order here: http://docs.cloudfoundry.org/running/managing-cf/usage-events.html
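Concretely, the ordered polling that document describes can be done with cf curl, paging with the after_guid filter (a sketch; the guid is illustrative):

cf curl "/v2/app_usage_events?results-per-page=100"
# on subsequent polls, pass the guid of the last event already processed:
cf curl "/v2/app_usage_events?results-per-page=100&after_guid=40afe01a-b15a-4b8d-8bd1-e36a0ba2f6f5"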
-Dieu CF Runtime PMC Lead
On Wed, Mar 9, 2016 at 10:27 AM, KRuelY <kevinyudhiswara(a)gmail.com> wrote:
Hi,
I am currently working on metering runtime usage, and one issue I'm facing is that usage submissions can come in out of order (due to network errors or other causes). Before this issue, the way metering runtime usage works is quite simple. There is an app that will look at cf_events and submit usages to [cf-abacus](https://github.com/cloudfoundry-incubator/cf-abacus).
{ "metadata": { "guid": "40afe01a-b15a-4b8d-8bd1-e36a0ba2f6f5", "url": "/v2/app_usage_events/40afe01a-b15a-4b8d-8bd1-e36a0ba2f6f5", "created_at": "2016-03-02T09:48:09Z" }, "entity": { "state": "STARTED", "memory_in_mb_per_instance": 512, "instance_count": 1, "app_guid": "a2ab1b5a-94c0-4344-9a71-a1d2b11f483a", "app_name": "abacus-usage-collector", "space_guid": "d34d770d-4cd0-4bdc-8c83-8fdfa5f0b3cb", "space_name": "dev", "org_guid": "238a3e78-3fc8-4542-928a-88ee99643732", "buildpack_guid": "b77d0ef8-da1f-4c0a-99cc-193449324706", "buildpack_name": "nodejs_buildpack", "package_state": "STAGED", "parent_app_guid": null, "parent_app_name": null, "process_type": "web" } }
The way this app works is by looking at the state. If the state is STARTED, it will submit usage to abacus with the instance_memory = memory_in_mb_per_instance, running_instances = instance_count, and since = created_at. If the state is STOPPED, it will submit usage to abacus with the instance_memory = 0, running_instances = 0, and since = created_at.
In the ideal situation, where there are no out-of-order submissions, this is fine. A simple (but exaggerated) example:
Usage instance_memory = 1GB, running_instances = 1, since = 3/9 00:00 comes in. (STARTED)
Usage instance_memory = 0GB, running_instances = 0, since = 3/10 00:00 comes in. (STOPPED)
Then Abacus knows that the app consumed 1GB * (3/10 - 3/9 = 24 hours) = 24 GB-hours.
But when the usage comes in out of order:
Usage instance_memory = 0GB, running_instances = 0, since = 3/10 00:00 comes in. (STOPPED)
Usage instance_memory = 1GB, running_instances = 1, since = 3/9 00:00 comes in. (STARTED)
the formula that Abacus currently has would not work.
Abacus has another formula that would take care of this out-of-order submission, but it only works if we have previous_instance_memory and previous_running_instances.
When looking for a way to have these fields, we concluded that the cleanest way would be to add previous_memory_in_mb_per_instance and previous_instance_count to the cf_event. It would also make app reconfiguration and cf scale make more sense, because currently cf scale is reported as a STOP and a START.
To sum up, the cf_event state submitted would include information:
// Starting
{ "state": "STARTED", "memory_in_mb_per_instance": 512, "instance_count": 1, "previous_memory_in_mb_per_instance": 0, "previous_instance_count": 0 }
// Scaling up
{ "state": "SCALE"?, "memory_in_mb_per_instance": 512, "instance_count": 2, "previous_memory_in_mb_per_instance": 512, "previous_instance_count": 1 }
// Scaling down
{ "state": "SCALE"?, "memory_in_mb_per_instance": 512, "instance_count": 1, "previous_memory_in_mb_per_instance": 512, "previous_instance_count": 2 }
// Stopping
{ "state": "STOPPED", "memory_in_mb_per_instance": 0, "instance_count": 0, "previous_memory_in_mb_per_instance": 512, "previous_instance_count": 1 }
Any thoughts/feedbacks/guidance?
Re: CF deployment with Diego support only ?
Benjamin Gandon
Perfect. Thanks Eric!
On 11 March 2016 at 11:18, Eric Malm <emalm(a)pivotal.io> wrote:
Hi, Benjamin,
Yes, in addition to setting the runner and hm9000 instance counts to 0 in the CF manifest, those two CC properties should be all you need to change to make your CF+Diego deployment Diego-only. CC Admins can still move apps between backends, but no other users would be able to.
Thanks, Eric, CF Runtime Diego PM
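Putting Eric's answer together, a minimal sketch of the manifest changes (job names follow the runner_z*/hm9000_z* convention discussed in this thread; zone suffixes depend on your deployment):

jobs:
- name: runner_z1
  instances: 0
- name: hm9000_z1
  instances: 0
properties:
  cc:
    default_to_diego_backend: true
    users_can_select_backend: false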
On Thu, Mar 10, 2016 at 4:42 AM, Benjamin Gandon <benjamin(a)gandon.org> wrote:
That's right, Amit, but it was just a typo by me. I meant setting instance counts to zero for “runner_z*” and “hm9000_z*”.
I saw in a-detailed-transition-timeline <https://github.com/cloudfoundry-incubator/diego-design-notes/blob/master/migrating-to-diego.md#a-detailed-transition-timeline> that those two properties are also of help:
- cc.default_to_diego_backend=true
- cc.users_can_select_backend=false
So, all in all, is that really all that needs to be done?
/Benjamin
On 10 March 2016 at 09:07, Amit Gupta <agupta(a)pivotal.io> wrote:
You need the api jobs, those are the cloud controllers! Set the runner and hm9000 jobs to 0 instances, or even remove them from your deployment manifest altogether.
On Wed, Mar 9, 2016 at 11:39 PM, Benjamin Gandon <benjamin(a)gandon.org> wrote:
Hi cf-dev,
For a fresh new deployment of cf-release <https://github.com/cloudfoundry/cf-release>, I wonder how the default manifest stubs and templates should be modified to remove unnecessary support for DEAs in favor of Diego?
Indeed, I’m starting with a working deployment of cf+diego. And now I want to wipe out those ancient DEA and HM9000 I don’t need.
I tried to draw inspiration from the MicroPCF main deployment manifest <https://github.com/pivotal-cf/micropcf/blob/master/images/manifest.yml>. (Are there any other sources for Diego-only CF deployments BTW?) At the moment, all I see in this example is that I need to set « instances: » counts to zero for both « api_z* » and « hm9000_z* » jobs.
Is this sufficient ? Should I perform some more adaptations ? Thanks for your guidance.
/Benjamin
Re: CF deployment with Diego support only ?
Hi, Benjamin,
Yes, in addition to setting the runner and hm9000 instance counts to 0 in the CF manifest, those two CC properties should be all you need to change to make your CF+Diego deployment Diego-only. CC Admins can still move apps between backends, but no other users would be able to.
Thanks,
Eric, CF Runtime Diego PM
On Thu, Mar 10, 2016 at 4:42 AM, Benjamin Gandon <benjamin(a)gandon.org> wrote:
That's right, Amit, but it was just a typo by me. I meant setting instance counts to zero for “runner_z*” and “hm9000_z*”.
I saw in a-detailed-transition-timeline <https://github.com/cloudfoundry-incubator/diego-design-notes/blob/master/migrating-to-diego.md#a-detailed-transition-timeline> that those two properties are also of help:
- cc.default_to_diego_backend=true
- cc.users_can_select_backend=false
So, all in all, is that really all that needs to be done?
/Benjamin
On 10 March 2016 at 09:07, Amit Gupta <agupta(a)pivotal.io> wrote:
You need the api jobs, those are the cloud controllers! Set the runner and hm9000 jobs to 0 instances, or even remove them from your deployment manifest altogether.
On Wed, Mar 9, 2016 at 11:39 PM, Benjamin Gandon <benjamin(a)gandon.org> wrote:
Hi cf-dev,
For a fresh new deployment of cf-release <https://github.com/cloudfoundry/cf-release>, I wonder how the default manifest stubs and templates should be modified to remove unnecessary support for DEAs in favor of Diego?
Indeed, I’m starting with a working deployment of cf+diego. And now I want to wipe out those ancient DEA and HM9000 I don’t need.
I tried to draw inspiration from the MicroPCF main deployment manifest <https://github.com/pivotal-cf/micropcf/blob/master/images/manifest.yml>. (Are there any other sources for Diego-only CF deployments BTW?) At the moment, all I see in this example is that I need to set « instances: » counts to zero for both « api_z* » and « hm9000_z* » jobs.
Is this sufficient ? Should I perform some more adaptations ? Thanks for your guidance.
/Benjamin
Re: Required manifest changes for Cloud Foundry
Good catch. Created issue: https://github.com/cloudfoundry/buildpack-releases/issues/2
On Thu, Mar 10, 2016 at 10:15 PM, Benjamin Gandon <benjamin(a)gandon.org> wrote:
Indeed I scripted a couple of « bosh create release » / « bosh upload release » and then bosh-workspace is happy with it.
It worked like a charm. Buildpacks just end up being there, automatically updated. That’s Great! I really look forward to being able to update the java-buildpack in the same way!
I’m not familiar with the inner workflow of bosh-workspace either. I use it because what you type just makes sense. I suppose bosh-workspace uploads the release manifest and tells the director to recreate the release tarball from that. It’s time-effective when your director has much more bandwidth than your BOSH CLI.
By the way, all config/final.yml for all buildpacks contain the same settings:
blobstore:
  file_name: stacks
  provider: s3
  options:
    bucket_name: pivotal-buildpacks
    folder: tmp/builpacks-release-blobs
Is it normal that the filenames are all « stacks » for all buildpacks? I’m afraid these settings might not have been properly set. This would explain the whole thing.
/Benjamin
On 11 March 2016 at 00:04, Amit Gupta <agupta(a)pivotal.io> wrote:
I did
cd cf-release
git fetch origin
git checkout develop
git pull --ff-only
./scripts/update
bosh create release --with-tarball
And also
cd src/buildpacks/binary-buildpack-release/
git fetch origin
(confirmed HEAD was pointed at origin/master)
bosh create release
Everything worked fine for me, meaning it was able to sync blobs down from the remote blobstores. I'm not familiar with bosh-workspace, and it's not clear to me why it's trying to upload anything (e.g. Uploading 'ruby-buildpack-release/1.6.14').
On Thu, Mar 10, 2016 at 2:48 PM, Benjamin Gandon <benjamin(a)gandon.org> wrote:
And btw Amit, it looks like the java-buildpack v3.6 is here with its fellows:
https://github.com/cloudfoundry/buildpack-releases/blob/master/java-buildpack-release/releases/java-buildpack-release/java-buildpack-release-3.6.yml
For example, the « e6ff7d79e50f0aaafa92f100f346e648c503ab17 » SHA in the error below (when recreating the java-buildpack-release) is the one of the first blob in the release manifest above.
On 10 March 2016 at 23:24, Benjamin Gandon <benjamin(a)gandon.org> wrote:
No no no, these are not SHAs of cf-release, but those of all the buildpack-releases indeed. Looks like no blobs of these releases are actually available online, are they?
I'm running the standard middle step "bosh prepare deployment" provided by bosh-workspace. (See <https://github.com/cloudfoundry-incubator/bosh-workspace>)
/Benjamin
On 10 March 2016 at 21:14, Amit Gupta <agupta(a)pivotal.io> wrote:
At the time of the email, the java buildpack hadn't been extracted into a separate release yet. I believe it has now, and that will be reflected in CF v232.
What command did you run? What SHA of cf-release have you checked out?
On Thu, Mar 10, 2016 at 12:10 PM, Benjamin Gandon <benjamin(a)gandon.org> wrote:
Amit, just for me to be sure, why didn’t you list the java-buildpack?
Also, have the blobs properly been uploaded? I copy below the BOSH errors I get:
With binary-buildpack:
Uploading 'binary-buildpack-release/1.0.1'
Recreating release from the manifest
MISSING
Cannot find package with checksum `413ce11236f87273ba8a9249b6e3bebb3d0db92b'
With go-buildpack:
Uploading 'go-buildpack-release/1.7.3'
Recreating release from the manifest
MISSING
Cannot find package with checksum `300760637ee0babd5fddd474101dfa634116d9c4'
With java-buildpack:
Uploading 'java-buildpack-release/3.6'
Recreating release from the manifest
MISSING
Cannot find package with checksum `e6ff7d79e50f0aaafa92f100f346e648c503ab17'
With nodejs-buildpack:
Uploading 'nodejs-buildpack-release/1.5.7'
Recreating release from the manifest
MISSING
Cannot find package with checksum `b3edbcfb9435892749dffcb99f06d00fb4c59c5b'
With php-buildpack:
Uploading 'php-buildpack-release/4.3.6'
Recreating release from the manifest
MISSING
Cannot find package with checksum `fbc784608ffa3ceafed1810b69c12a7277c86ee0'
With python-buildpack:
Uploading 'python-buildpack-release/1.5.4'
Recreating release from the manifest
MISSING
Cannot find package with checksum `7e2377ccd9df10b21aba49c8e95338a0b1b3b92e'
With ruby-buildpack:
Uploading 'ruby-buildpack-release/1.6.14'
Recreating release from the manifest
MISSING
Cannot find package with checksum `362282d45873634db888a609cd64d7d70e9f4be2'
With staticfile-buildpack:
Uploading 'staticfile-buildpack-release/1.3.2'
Recreating release from the manifest
MISSING
Cannot find package with checksum `06382f7c804cc7f01a8dc78ca9c91e9b7f4712cc'
Are these on a specific blobstore I should point my deployment manifest at?
/Benjamin
On 18 February 2016 at 19:31, Amit Gupta <agupta(a)pivotal.io> wrote:
Hey developers,
The buildpacks team has recently extracted the buildpacks as separate releases. As we transition to deploying CF via a bunch of composed releases, for now we're making the change more transparent, by submoduling and symlinking the buildpacks releases back into cf-release. This requires some manifest changes: buildpacks are now colocated with cloud controller, rather than package dependencies of cloud controller.
If you are using spiff to generate manifests, and are not overriding the templates/jobs colocated on the api_zN jobs, you can ignore this email. If you are overriding the api_zN templates in your stub, or if you are not using spiff, you will need to add the following:
templates:
- name: consul_agent
  release: cf
+ - name: go-buildpack
+   release: cf
+ - name: binary-buildpack
+   release: cf
+ - name: nodejs-buildpack
+   release: cf
+ - name: ruby-buildpack
+   release: cf
+ - name: php-buildpack
+   release: cf
+ - name: python-buildpack
+   release: cf
+ - name: staticfile-buildpack
+   release: cf
- name: cloud_controller_ng
  release: cf
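After redeploying, one quick way to sanity-check that the colocated buildpack jobs installed what you expect is to list the admin buildpacks:

cf buildpacks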
Please see this commit ( https://github.com/cloudfoundry/cf-release/commit/549e5a8271bbf0d30efdb84f381f38c8bf22099d) for more details.
Best, Amit
Re: Space Manager visibility of an app's environment variables
Hey Mike,
I think that sounds like a reasonable approach to the problem. I'm not aware of any other things that would change under that definition. I've added Nick, the new PM currently dojo-ing with CAPI, so he can note this for future consideration.
What do others think about Mike's delineation above?
-Dieu
On Mon, Feb 29, 2016 at 12:52 PM, Mike Youngstrom <youngm(a)gmail.com> wrote:
Thanks Dieu,
I don't have a problem with the change request. It just seems like an oddly specific change given the current not very well defined set of roles we have.
I'd like to see a more clear definition of what our roles should generally be able to do and change CC to match the definition rather than making one off specific changes like this customized by feature flag.
For example, I'm assuming this request must be coming from a customer that believes a manager should be able to see everything in a space but not change stuff.
Let me propose some definitions based on this requested change.
Manager: * See everything. * Change only user access.
Developer: * See everything * Change everything except for user access.
Auditor: * See everything except for potentially sensitive data. * Change nothing.
Under loose definitions like the ones above, I would support giving Managers access to app environments. However, by defining Manager like I have, are there other things a Manager should be able to see that they currently cannot? If so, now that we have a definition, we can fix that.
In addition, once we have a clear definition for CC roles, it becomes easier to make more useful feature flags to customize this functionality. For example, instead of making a feature flag to allow managers to see environment variables you can make a more general and clear flag that, for example, denies managers the ability to see potentially sensitive data or a feature flag that allows Auditors to see sensitive data if someone requests that.
Thoughts?
Mike
On Sun, Feb 28, 2016 at 7:24 PM, Dieu Cao <dcao(a)pivotal.io> wrote:
Something like custom roles and finer grained permissions is something we would like to look at in the future, but the CAPI team is currently focused on Tasks, v3, and soon Elastic Clusters, which will occupy us for the next few months.
Thanks for sharing your point of view, Bernd. I think that makes sense for certain deployments.
For the near term, if we do introduce this functionality, we may introduce it with some feature flags, with the intention to some day deprecate that feature flag when custom roles and finer grained permissions are in place.
Thanks for the feedback.
-Dieu CF CAPI PM
On Sat, Feb 27, 2016 at 12:18 PM, Krannich, Bernd <bernd.krannich(a)sap.com> wrote:
I guess it all depends on what you use CF for. I guess there’s a spectrum between:
- An easy-to-use, developer-centric, company-internal environment: make it as easy as possible to grant people the permissions they need.
- A battle-hardened, multi-tenant (=multi-customer) enterprise platform in highly regulated environments: make it even possible to apply separation of duties, which I guess most often means less convenience.
From Mike’s parallel mail:
> What I don't want to see is a gradual promotion of developer capabilities into Manager making it not clear what Managers can do and what they cannot.
Yes, for this second type of usage I’d completely agree here.
One solution might be to allow operators to create and manage custom roles, like we do with quotas. If the flexibility to implement business rules is accessible under the "role" abstraction, we can afford friendlier/more naive defaults.
That’s a great idea. I’m not clear if this is then a Cloud Controller API discussion only or if – as it seems to me – this involves also efforts in the UAA area.
Regards, Bernd
From: Jesse Alford <jalford(a)pivotal.io>
Reply-To: "Discussions about Cloud Foundry projects and the system overall." <cf-dev(a)lists.cloudfoundry.org>
Date: Saturday 27 February 2016 at 20:38
To: "Discussions about Cloud Foundry projects and the system overall." <cf-dev(a)lists.cloudfoundry.org>
Subject: [cf-dev] Re: Re: Re: Re: Re: Re: Space Manager visibility of an app's environment variables
Agreed: the simple/hierarchical model of permissions is less flexible and composable. Having each role inherit the permissions of the one "under" it plus a new layer limits the business rules this can be used to implement.
I might even argue that managers shouldn't necessarily have auditor permissions automatically.
The counter argument here is that we're basically either inflicting a poor user experience or forcing clients to take on additional complexity to give users intuitive experiences. One solution might be to allow operators to create and manage custom roles, like we do with quotas. If the flexibility to implement business rules is accessible under the "role" abstraction, we can afford friendlier/more naive defaults.
On Sat, Feb 27, 2016, 11:24 AM Krannich, Bernd <bernd.krannich(a)sap.com> wrote:
Sorry for broadening the discussion, but for us it’s the other way around for a similar use case: having the CF admin role grants you developer rights in all orgs and spaces (for example, you could `cf env` to retrieve service instance credentials), which is something that’s IMHO not desirable from a security/compliance perspective, especially when running multiple (external) customers on one CF instance. Customers typically don’t want their providers to be able to see all their data by default. Sure, you can always grant yourself these roles as a CF admin, but then there’s audit logging to track those changes.
I guess one could go along a similar line of argumentation for space manager and space developer.
For both cases the thing is: You can achieve the desired behavior by granting more roles but if you combine roles there’s no way to achieve separation of duties.
Regards, Bernd
From: Matt Cholick <cholick(a)gmail.com>
Reply-To: "Discussions about Cloud Foundry projects and the system overall." <cf-dev(a)lists.cloudfoundry.org>
Date: Saturday 27 February 2016 at 19:04
To: "Discussions about Cloud Foundry projects and the system overall." <cf-dev(a)lists.cloudfoundry.org>
Subject: [cf-dev] Re: Re: Re: Re: Space Manager visibility of an app's environment variables
This is a source of confusion for our end users as well. Users will have the space manager role, but not the space developer role, and fail when they first try to push an application.
On Sat, Feb 27, 2016 at 8:16 AM, Mike Youngstrom <youngm(a)gmail.com> wrote:
For some history this is the last discussion I recall having on the subject: https://groups.google.com/a/cloudfoundry.org/d/msg/vcap-dev/8Owzq9pzDSs/FzKX60KBdAkJ
Mike
On Sat, Feb 27, 2016 at 5:23 AM, Tom Sherrod <tom.sherrod(a)gmail.com> wrote:
I'm glad to see this question. Why does a space manager role not include all space developer permissions?
On Thu, Feb 25, 2016 at 11:51 AM, Mike Youngstrom <youngm(a)gmail.com> wrote:
I debated long ago and still believe that a space manager should be able to do everything a space developer can do. Could this be simplified just by making that change? Or are there still reasons to limit a space manager's abilities?
Mike
On Thu, Feb 25, 2016 at 3:54 AM, Dieu Cao <dcao(a)pivotal.io> wrote:
Hi All,
Currently only Space Developers have visibility on the /v2/apps/:guid/env end point which backs the cf cli command `cf env APPNAME`. Please let me know if you have any objections to allowing Space Managers visibility of an app's environment variables. This is something we would like to tackle soon to address some visibility concerns.
-Dieu CF CAPI PM
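For context, the endpoint in question is what `cf env APPNAME` wraps; it can also be exercised directly (a sketch, app name illustrative):

cf curl /v2/apps/$(cf app my-app --guid)/env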
Re: Adding previous_instances and previous_memory fields to cf_event
Hi Hristo,
Correct me if I'm wrong, but it sounds like you are calling purge multiple times. Am I misunderstanding the workflow you are describing?
Purge should only be called one time EVER on any deployment. It should not be called on each billing cycle. Cloud Controller will purge older billing events itself based on the configured cc.app_usage_events.cutoff_age_in_days and cc.service_usage_events.cutoff_age_in_days, which both default to 31 days.
-Dieu
On Wed, Mar 9, 2016 at 11:57 PM, Hristo Iliev <hsiliev(a)gmail.com> wrote:
Hi Dieu,
We are polling app-usage-events with Abacus, but because of purge the events may be out of order right after billing epoch started. But that's only part of the problem.
To consume app-usage-events, every integrator needs to build additional infrastructure like:
- a simple filter, loadbalancer, or API management product to disable purging once the billing epoch has started
- DB replication software that pulls data and deals with wrongly ordered events after purge (we use abacus-cf-bridge)
- the data warehouse described in the doc you sent
Introducing the previous values in the usage events will help us get rid of most of the infrastructure we need in order to be able to deal with usage events, before they even reach a billing system. We won't need to care about purge calls or an additional DB, but instead simply pull events. The previous values help us to:
- use formulas that do not care about the order of events (solves the purge problem)
- get the info about a billing-relevant change (we don't have to cache, access a DB, or scan a stream to know what changed)
- simplify the processing logic in Abacus (or another metering/aggregation solution)
We now pull the usage events, but we would like to be notified, to offload the CC from the constant /v2/app_usage_events calls. This however will not solve any of the problems we now have, and in fact may mess up the ordering of the events.
Regards, Hristo Iliev
2016-03-10 6:32 GMT+02:00 Dieu Cao <dcao(a)pivotal.io>:
We don't advise using /v2/events for metering/billing for precisely the reason you mention, that order of events is not guaranteed.
You can find more information about app usage events and service usage events which are guaranteed to be in order here: http://docs.cloudfoundry.org/running/managing-cf/usage-events.html
-Dieu CF Runtime PMC Lead
On Wed, Mar 9, 2016 at 10:27 AM, KRuelY <kevinyudhiswara(a)gmail.com> wrote:
Hi,
I am currently working on metering runtime usage, and one issue I'm facing is that usage submissions can come in out of order (due to network errors or other causes). Before this issue, the way metering runtime usage works is quite simple. There is an app that will look at cf_events and submit usages to [cf-abacus](https://github.com/cloudfoundry-incubator/cf-abacus).
{ "metadata": { "guid": "40afe01a-b15a-4b8d-8bd1-e36a0ba2f6f5", "url": "/v2/app_usage_events/40afe01a-b15a-4b8d-8bd1-e36a0ba2f6f5", "created_at": "2016-03-02T09:48:09Z" }, "entity": { "state": "STARTED", "memory_in_mb_per_instance": 512, "instance_count": 1, "app_guid": "a2ab1b5a-94c0-4344-9a71-a1d2b11f483a", "app_name": "abacus-usage-collector", "space_guid": "d34d770d-4cd0-4bdc-8c83-8fdfa5f0b3cb", "space_name": "dev", "org_guid": "238a3e78-3fc8-4542-928a-88ee99643732", "buildpack_guid": "b77d0ef8-da1f-4c0a-99cc-193449324706", "buildpack_name": "nodejs_buildpack", "package_state": "STAGED", "parent_app_guid": null, "parent_app_name": null, "process_type": "web" } }
The way this app works is by looking at the state. If the state is STARTED, it will submit usage to abacus with the instance_memory = memory_in_mb_per_instance, running_instances = instance_count, and since = created_at. If the state is STOPPED, it will submit usage to abacus with the instance_memory = 0, running_instances = 0, and since = created_at.
In the ideal situation, where there are no out-of-order submissions, this is fine. A simple (but exaggerated) example:
Usage instance_memory = 1GB, running_instances = 1, since = 3/9 00:00 comes in. (STARTED)
Usage instance_memory = 0GB, running_instances = 0, since = 3/10 00:00 comes in. (STOPPED)
Then Abacus knows that the app consumed 1GB * (3/10 - 3/9 = 24 hours) = 24 GB-hours.
But when the usage comes in out of order:
Usage instance_memory = 0GB, running_instances = 0, since = 3/10 00:00 comes in. (STOPPED)
Usage instance_memory = 1GB, running_instances = 1, since = 3/9 00:00 comes in. (STARTED)
the formula that Abacus currently has would not work.
Abacus has another formula that would take care of this out-of-order submission, but it only works if we have previous_instance_memory and previous_running_instances.
When looking for a way to have these fields, we concluded that the cleanest way would be to add previous_memory_in_mb_per_instance and previous_instance_count to the cf_event. It would also make app reconfiguration and cf scale make more sense, because currently cf scale is reported as a STOP and a START.
To sum up, the cf_event state submitted would include information:
// Starting
{ "state": "STARTED", "memory_in_mb_per_instance": 512, "instance_count": 1, "previous_memory_in_mb_per_instance": 0, "previous_instance_count": 0 }
// Scaling up
{ "state": "SCALE"?, "memory_in_mb_per_instance": 512, "instance_count": 2, "previous_memory_in_mb_per_instance": 512, "previous_instance_count": 1 }
// Scaling down
{ "state": "SCALE"?, "memory_in_mb_per_instance": 512, "instance_count": 1, "previous_memory_in_mb_per_instance": 512, "previous_instance_count": 2 }
// Stopping
{ "state": "STOPPED", "memory_in_mb_per_instance": 0, "instance_count": 0, "previous_memory_in_mb_per_instance": 512, "previous_instance_count": 1 }
Any thoughts/feedbacks/guidance?
Re: Required manifest changes for Cloud Foundry
Benjamin Gandon
Indeed I scripted a couple of « bosh create release » / « bosh upload release » and then bosh-workspace is happy with it.
It worked like a charm. Buildpacks just end up being there, automatically updated. That’s Great! I really look forward to being able to update the java-buildpack in the same way!
I’m not familiar with the inner workflow of bosh-workspace either. I use it because what you type just makes sense. I suppose bosh-workspace uploads the release manifest and tells the director to recreate the release tarball from that. It’s time-effective when your director has much more bandwidth than your BOSH CLI.
By the way, all config/final.yml for all buildpacks contain the same settings:
blobstore:
  file_name: stacks
  provider: s3
  options:
    bucket_name: pivotal-buildpacks
    folder: tmp/builpacks-release-blobs
Is it normal that the filenames are all « stacks » for all buildpacks? I’m afraid these settings might not have been properly set. This would explain the whole thing.
/Benjamin
On 11 March 2016 at 00:04, Amit Gupta <agupta(a)pivotal.io> wrote:
I did
cd cf-release
git fetch origin
git checkout develop
git pull --ff-only
./scripts/update
bosh create release --with-tarball
And also
cd src/buildpacks/binary-buildpack-release/
git fetch origin
(confirmed HEAD was pointed at origin/master)
bosh create release
Everything worked fine for me, meaning it was able to sync blobs down from the remote blobstores. I'm not familiar with bosh-workspace, and it's not clear to me why it's trying to upload anything (e.g. Uploading 'ruby-buildpack-release/1.6.14').
On Thu, Mar 10, 2016 at 2:48 PM, Benjamin Gandon <benjamin(a)gandon.org> wrote:
And btw Amit, it looks like the java-buildpack v3.6 is here with its fellows:
https://github.com/cloudfoundry/buildpack-releases/blob/master/java-buildpack-release/releases/java-buildpack-release/java-buildpack-release-3.6.yml
For example, the « e6ff7d79e50f0aaafa92f100f346e648c503ab17 » SHA in the error below (when recreating the java-buildpack-release) is the one of the first blob in the release manifest above.
On 10 March 2016 at 23:24, Benjamin Gandon <benjamin(a)gandon.org> wrote:
No no no, these are not SHAs of cf-release, but those of all the buildpack-releases indeed. Looks like no blobs of these releases are actually available online, are they?
I'm running the standard middle step "bosh prepare deployment" provided by bosh-workspace. (See <https://github.com/cloudfoundry-incubator/bosh-workspace>)
/Benjamin
On 10 March 2016 at 21:14, Amit Gupta <agupta(a)pivotal.io> wrote:
At the time of the email, the java buildpack hadn't been extracted into a separate release yet. I believe it has now, and that will be reflected in CF v232.
What command did you run? What SHA of cf-release have you checked out?
On Thu, Mar 10, 2016 at 12:10 PM, Benjamin Gandon <benjamin(a)gandon.org> wrote:
Amit, just for me to be sure, why didn’t you list the java-buildpack?
Also, have the blobs properly been uploaded? I copy below the BOSH errors I get:
With binary-buildpack:
Uploading 'binary-buildpack-release/1.0.1'
Recreating release from the manifest
MISSING
Cannot find package with checksum `413ce11236f87273ba8a9249b6e3bebb3d0db92b'
With go-buildpack:
Uploading 'go-buildpack-release/1.7.3'
Recreating release from the manifest
MISSING
Cannot find package with checksum `300760637ee0babd5fddd474101dfa634116d9c4'
With java-buildpack:
Uploading 'java-buildpack-release/3.6'
Recreating release from the manifest
MISSING
Cannot find package with checksum `e6ff7d79e50f0aaafa92f100f346e648c503ab17'
With nodejs-buildpack:
Uploading 'nodejs-buildpack-release/1.5.7'
Recreating release from the manifest
MISSING
Cannot find package with checksum `b3edbcfb9435892749dffcb99f06d00fb4c59c5b'
With php-buildpack:
Uploading 'php-buildpack-release/4.3.6'
Recreating release from the manifest
MISSING
Cannot find package with checksum `fbc784608ffa3ceafed1810b69c12a7277c86ee0'
With python-buildpack:
Uploading 'python-buildpack-release/1.5.4'
Recreating release from the manifest
MISSING
Cannot find package with checksum `7e2377ccd9df10b21aba49c8e95338a0b1b3b92e'
With ruby-buildpack:
Uploading 'ruby-buildpack-release/1.6.14'
Recreating release from the manifest
MISSING
Cannot find package with checksum `362282d45873634db888a609cd64d7d70e9f4be2'
With staticfile-buildpack:
Uploading 'staticfile-buildpack-release/1.3.2'
Recreating release from the manifest
MISSING
Cannot find package with checksum `06382f7c804cc7f01a8dc78ca9c91e9b7f4712cc'
Are these on a specific blobstore I should point my deployment manifest at?
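In the meantime, here is one way to trace where a missing package should come from; a sketch only, since the S3 key layout in the last command is my assumption based on the config/final.yml settings quoted earlier:

# find the package entry carrying the missing checksum in the final release manifest
grep -n -B 2 -A 2 e6ff7d79e50f0aaafa92f100f346e648c503ab17 \
  src/buildpacks/java-buildpack-release/releases/java-buildpack-release/java-buildpack-release-3.6.yml

# the matching entry also lists a blobstore_id; check whether that object is publicly fetchable
BLOBSTORE_ID=copy-from-manifest   # hypothetical placeholder, copy it from the entry found above
curl -I "https://pivotal-buildpacks.s3.amazonaws.com/tmp/builpacks-release-blobs/${BLOBSTORE_ID}"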
/Benjamin
On 18 Feb 2016, at 19:31, Amit Gupta <agupta(a)pivotal.io> wrote:
Hey developers,
The buildpacks team has recently extracted the buildpacks into separate BOSH releases. As we transition to deploying CF via a set of composed releases, for now we're making the change transparent by submoduling and symlinking the buildpack releases back into cf-release. This requires some manifest changes: buildpacks are now colocated with the cloud controller, rather than being package dependencies of it.
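In practice, that means the new submodules must be checked out before building cf-release locally; ./scripts/update takes care of this, or a single buildpack submodule can be updated by hand, e.g.:

cd cf-release
./scripts/update   # syncs all submodules, including the new ones under src/buildpacks/
# or, for just one of the new buildpack release submodules:
git submodule update --init src/buildpacks/binary-buildpack-release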
If you are using spiff to generate manifests, and are not overriding the templates/jobs colocated on the api_zN jobs, you can ignore this email. If you are overriding the api_zN templates in your stub, or if you are not using spiff, you will need to add the following:
  templates:
  - name: consul_agent
    release: cf
+ - name: go-buildpack
+   release: cf
+ - name: binary-buildpack
+   release: cf
+ - name: nodejs-buildpack
+   release: cf
+ - name: ruby-buildpack
+   release: cf
+ - name: php-buildpack
+   release: cf
+ - name: python-buildpack
+   release: cf
+ - name: staticfile-buildpack
+   release: cf
  - name: cloud_controller_ng
    release: cf
Please see this commit (https://github.com/cloudfoundry/cf-release/commit/549e5a8271bbf0d30efdb84f381f38c8bf22099d) for more details.
Best, Amit
|
|