Spotting performance regressions
Michal Tekel
Hi,
I wanted to check whether you know of a good way to spot performance regressions on CF, and whether anyone is already doing it (at least partially). We are mainly interested in discovering user-facing degradation, that is, app operations (deploy, scale, delete, manage routes) and routing layer operations (latency per request, total throughput, SSL termination capacity), especially in connection with (heavier) routing services usage.

We did a brief investigation into the best way to automate this kind of regression testing, but didn't find any complete solution. We had a look at the CF PATs tool [1] and made a container [2] it can be run from, but that only gives us metrics on app operations, with no way of spotting regressions or automating the tests.

We were wondering whether anyone already does any kind of performance measurement as part of their CF build/deploy pipeline/process, and what their experience with tracking the results has been.

[1] https://github.com/cloudfoundry-incubator/pat
[2] https://hub.docker.com/r/keymon/governmentpaas-cf-pats/
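For illustration, the kind of raw measurement we have in mind is roughly the following (a minimal Go sketch, not PATs itself; the app name and route are placeholders):

// timing.go - rough sketch: time one app operation (cf push) and sample
// per-request latency against a route. App name and URL are placeholders.
package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"time"
)

func main() {
	// Time a single app operation (deploy).
	start := time.Now()
	if err := exec.Command("cf", "push", "perf-test-app").Run(); err != nil {
		fmt.Println("push failed:", err)
		return
	}
	fmt.Printf("deploy took %v\n", time.Since(start))

	// Sample per-request latency against the app's route.
	var total time.Duration
	const n = 100
	for i := 0; i < n; i++ {
		t := time.Now()
		resp, err := http.Get("https://perf-test-app.example.com/")
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		resp.Body.Close()
		total += time.Since(t)
	}
	fmt.Printf("mean latency over %d requests: %v\n", n, total/n)
}

Something like this per pipeline run would give us the raw numbers, but not yet a way to decide when a change counts as a regression.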
Guillaume Berche
Hi Michal,
I understand the routing team is working on automated performance tests that aim at detecting performance regressions on the gorouter. We have not yet set up execution of those at Orange. More in the pointers below. The routing team was considering open-sourcing the routing CI repo; I'm not sure of its current status.

https://github.com/cloudfoundry-incubator/routing-perf-release
https://cf-routing.ci.cf-app.com/pipelines/routing/jobs/run-cf-load-test
https://cloudfoundry.slack.com/archives/routing/p1455232667000627
https://www.pivotaltracker.com/n/projects/1358110 with the search label:"gorouter performance" includedone:true, which covers the current perf regression work and the effort towards automated perf testing and determining max request rate and concurrent requests

Hope this helps,

Guillaume.
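P.S. As a rough illustration of the kind of check such a pipeline could make (just a sketch, not what the routing CI actually does): compare the current run's throughput to the mean of recent runs and fail beyond some tolerance. The numbers below are made up.

// regression_check.go - sketch: flag a regression when the current run's
// throughput drops more than a tolerance below the mean of recent runs.
// The history values here are made up for illustration only.
package main

import (
	"fmt"
	"os"
)

func main() {
	history := []float64{10500, 10320, 10610, 10440} // req/s from previous runs
	current := 9100.0                                // req/s from this run
	tolerance := 0.10                                // allow a 10% drop

	var sum float64
	for _, v := range history {
		sum += v
	}
	baseline := sum / float64(len(history))

	if current < baseline*(1-tolerance) {
		fmt.Printf("regression: %.0f req/s vs baseline %.0f req/s\n", current, baseline)
		os.Exit(1) // fail the pipeline stage
	}
	fmt.Printf("ok: %.0f req/s vs baseline %.0f req/s\n", current, baseline)
}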
Michal Tekel
Thanks!
That helps a lot. It looks like (from the Slack chat and Pivotal Tracker) the routing team already has a job set up which runs performance tests each hour and posts the results to Datadog, which is very similar to what we were looking to do. I got in touch with them on Slack... Thanks again...
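P.S. For our own notes, posting a run's result to Datadog can be as simple as something like this (a rough Go sketch against Datadog's v1 series API; the metric name and tags are made up, and DD_API_KEY is assumed to be set):

// post_metric.go - sketch: push a throughput gauge to Datadog's v1 series API.
// Metric name and tags are placeholders; DD_API_KEY must be set in the environment.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	payload := map[string]interface{}{
		"series": []map[string]interface{}{{
			"metric": "cf.perf.requests_per_sec",
			"points": [][]float64{{float64(time.Now().Unix()), 10450}},
			"type":   "gauge",
			"tags":   []string{"deployment:staging"},
		}},
	}
	body, err := json.Marshal(payload)
	if err != nil {
		fmt.Println("marshal failed:", err)
		os.Exit(1)
	}

	url := "https://api.datadoghq.com/api/v1/series?api_key=" + os.Getenv("DD_API_KEY")
	resp, err := http.Post(url, "application/json", bytes.NewReader(body))
	if err != nil {
		fmt.Println("post failed:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	fmt.Println("datadog response:", resp.Status)
}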
Mark St.Godard
Hi Michal,
(Sorry for the late response; I caught you on Slack as well, but I'm responding to the dev list too.)

Yes, we recently added a stage to our continuous delivery pipeline to catch performance regressions. Right now this stage is not using routing-perf-release; it uses a command-line (golang-based) HTTP load generator, and we are currently measuring and emitting requests/sec per run. The stage will also fail if the results of the run fall below a specific threshold. We also have a Datadog dashboard with all the routing-related metrics we get from the firehose.

An important thing to note is that this is just the beginning of our performance analysis work on the gorouter. Shannon (Routing PM) is heading up our next initiative, which focuses on gorouter performance and metrics. Part of this work will likely include enhancing our performance automation test suite and more information on metrics. I'll let Shannon chime in if he has additional info to add.

Cheers
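P.S. In case it's useful, the shape of that pipeline stage is roughly the following (a simplified sketch, not our actual tool; the target URL, duration, worker count and threshold are illustrative only):

// loadgen.go - sketch: fire requests at a route for a fixed duration from a few
// workers, report requests/sec, and exit non-zero below a threshold.
// Target URL, duration, concurrency and threshold are illustrative only.
package main

import (
	"fmt"
	"net/http"
	"os"
	"sync"
	"sync/atomic"
	"time"
)

func main() {
	target := "https://perf-test-app.example.com/"
	duration := 30 * time.Second
	workers := 10
	threshold := 1000.0 // minimum acceptable req/s

	var count int64
	deadline := time.Now().Add(duration)

	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for time.Now().Before(deadline) {
				resp, err := http.Get(target)
				if err != nil {
					continue
				}
				resp.Body.Close()
				atomic.AddInt64(&count, 1)
			}
		}()
	}
	wg.Wait()

	rps := float64(count) / duration.Seconds()
	fmt.Printf("%.1f req/s\n", rps)
	if rps < threshold {
		os.Exit(1) // fail the pipeline stage
	}
}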