Re: Testing behaviour of a production CF environment

Graham Bleach

On 1 June 2016 at 09:22, Daniel Jones <daniel.jones(a)> wrote:

Running acceptance tests in production is absolutely what I'd recommend -
in fact I drove that point home in my talk in Santa Clara last week (I can
forward on the link once the YouTube videos are up).
Sounds very relevant; I'll look forward to the video.

I've worked with customers who didn't use the official CATS, but instead
favoured writing their own in the BDD framework of their choice. We didn't
find them too onerous to develop and maintain, and an example test would be:

1. Push fixture app
2. Start app
3. Hit app, validate response
4. Hit URL on app to write to a given data service
5. Hit URL to read written value, validate
6. Stop app
7. Delete app
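Against a deployment with the cf CLI available, the steps above could be sketched roughly as follows. This is a minimal sketch, not a real test: the app name, route domain, and the /write and /read endpoints are hypothetical, and your own fixture app would define its own interface.

```shell
#!/usr/bin/env bash
set -euo pipefail

APP="data-service-fixture"            # hypothetical fixture app name
URL="https://${APP}.example.com"      # hypothetical route

cf push "$APP" --no-start             # 1. push fixture app
cf start "$APP"                       # 2. start app

# 3. hit app, validate response
curl --fail --silent "$URL/" >/dev/null

# 4. hit URL on app to write to a bound data service (hypothetical endpoint)
curl --fail --silent -X POST "$URL/write?key=probe&value=hello" >/dev/null

# 5. hit URL to read the written value back, validate
test "$(curl --fail --silent "$URL/read?key=probe")" = "hello"

cf stop "$APP"                        # 6. stop app
cf delete "$APP" -f                   # 7. delete app
```

A BDD framework would wrap each step in its own assertion with a clearer failure message, but the cf CLI calls underneath look the same.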

This exercised some of the core user-facing behaviour, and also that of
data services (search for Pivotal's apps like cf-redis-example-app,
which follow the
same pattern). We had additional tests that would log a given unique string
through an app, and then hit the log aggregation system to validate that it
had made its way through. The tests were small, so we had more granular
control over the frequency of each test, and got faster feedback.
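That log round-trip could be sketched like this, using `cf logs --recent` as a stand-in for whatever log aggregation endpoint you actually run. Again a hedged sketch: the app name and /log endpoint are hypothetical, and a real test would poll with a timeout rather than sleep once.

```shell
#!/usr/bin/env bash
set -euo pipefail

APP="logging-fixture"                 # hypothetical app name
MARKER="probe-$(date +%s)-$RANDOM"    # unique string to trace end-to-end

# hypothetical endpoint that writes its argument to the app's stdout
curl --fail --silent "https://${APP}.example.com/log?msg=${MARKER}" >/dev/null

sleep 5                               # crude; poll with a deadline in a real test
cf logs "$APP" --recent | grep -q "$MARKER"
```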
We have added tests for things we've built / configured; we borrowed a fair
amount in style from CATS.

In principle I think the conversations / decisions about which behaviour
should be tested are valuable, as is having tests written in a language /
framework that's understood by the team, so I can understand why people
would do this.

I don't think this works for us for things that are already tested in CATS,
though, as it feels like duplication of effort, both to write and to maintain
the tests. That's why I'm interested in the idea of moving tests around
within CATS to enable people to run a subset of tests that we consider to
be production-safe.
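For what it's worth, CATS already gates whole suites via flags in its integration_config.json, so a production-safe subset can be approximated today by toggling those. A sketch only — the exact field names vary between CATS versions, so check the README for the version you run:

```json
{
  "api": "api.example.com",
  "apps_domain": "apps.example.com",
  "include_apps": true,
  "include_services": false,
  "include_security_groups": false
}
```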

