Re: [abacus-perf] Persisting Metrics performance

Saravanakumar A. Srinivasan

I would like to add one more to the list of possible solutions for further discussion:

How about extending abacus-perf to optionally persist collected performance metrics into a database? 
In my opinion, writing to a database at the source of the collected data would greatly reduce programming complexity and would keep the persisted data consistent with the source.
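A minimal sketch of that idea, persisting each completed window at the source. Note that `saveWindow` and the in-memory `db` here are illustrative stand-ins, not part of the actual abacus-perf API:

```javascript
// Sketch: persist each completed metrics window at the source.
// Keying by counter name + window start time makes re-posting the
// same window an idempotent overwrite rather than a duplicate.
const db = new Map(); // stand-in for a real database client

const saveWindow = (name, windowStart, stats) => {
  const key = `${name}/${windowStart}`;
  db.set(key, stats);
  return key;
};

const key = saveWindow('my-app', 1447368300000, { ok: 42, errors: 1 });
console.log(key, db.get(key));
```

With a real database the `Map` would be replaced by, say, a document store client, but the idempotent-key idea carries over unchanged.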

However, I always wonder why one would need to persist this data. Any particular reasons?

Saravanakumar Srinivasan (Assk),

-----KRuelY <kevinyudhiswara@...> wrote: -----
To: cf-dev@...
From: KRuelY <kevinyudhiswara@...>
Date: 11/12/2015 02:45PM
Subject: [cf-dev] [abacus-perf] Persisting Metrics performance


One of the things I want to do is to persist the performance metrics
collected by abacus-perf. What would be the best way to do this? I have
considered a few solutions, but none of them seems to be the "correct" one.

The scenario is this: I have an application running, with two instances of
it currently up.

To collect my application's performance metrics, I need to aggregate the
metrics data collected by each instance's abacus-perf and store it in
a database.

The first solution is to use Turbine. Using Eureka to keep track of each
instance's IP address, I can configure Turbine to use Eureka instance
discovery. This way Turbine will aggregate the metrics data collected by
each instance's abacus-perf. The next step is to have a separate
application peek at the Turbine stream at some interval and post the data
to the database. The problem with this is that Turbine keeps serving the
last metrics data when there is no activity in the application, and only
flushes it when new stats come in. This means that every time I peek into
the Turbine stream, I have to check whether I have already posted that
data to the database.
The second solution is to have each instance post independently. Using
abacus-perf's 'all()', I can set up an interval that calls all(), checks the
time window, and posts accordingly. One restriction is that I can only post
the previous time window (since the current window is not yet complete), and
I need to filter out zero data. Another restriction is that my polling
interval cannot exceed perf's window interval. The problem with this is that
I am playing with timing: on some occasions I might lose data. I am not sure
this covers the case where perf flushes old metrics when new ones come in;
I need to make sure I save the data before perf flushes it.
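The previous-window logic could look something like the following. The `all()` stub, the window length, and the returned shape are assumptions for illustration; the real abacus-perf return shape may differ:

```javascript
// Sketch: post only the completed previous time window, skipping
// all-zero buckets. `all()` is a stub standing in for abacus-perf's
// all(); the real data shape may differ.
const WINDOW = 10000; // assumed perf window length in ms

const all = () => [
  { name: 'my-app',
    windows: { 1447368290000: { ok: 5, errors: 0 },
               1447368300000: { ok: 2, errors: 1 } } }
];

// Start of the window immediately before the one containing `now`.
const previousWindow = (now) => Math.floor(now / WINDOW) * WINDOW - WINDOW;

const postPrevious = (now, postToDB) => {
  const start = previousWindow(now);
  for (const counter of all()) {
    const stats = counter.windows[start];
    // Skip missing or all-zero windows.
    if (!stats || Object.values(stats).every((v) => v === 0)) continue;
    postToDB({ name: counter.name, start, stats });
  }
};

const sink = [];
postPrevious(1447368305000, (doc) => sink.push(doc));
console.log(sink);
```

The timing risk the mail describes is visible here: if the poller misses a whole window interval, that window's bucket is never posted, so the polling interval must stay strictly shorter than perf's window.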

Another solution is to mimic what the hystrix module does: instead of
streaming the metrics to the Hystrix dashboard, I would post them to the
database. I have yet to try this solution.
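In that variant, the dashboard serializer would be reused but pointed at a database sink instead of an SSE stream. The `toHystrixJSON` serializer and `postToDB` sink below are hypothetical stand-ins, not the real module's internals:

```javascript
// Sketch: serialize stats hystrix-dashboard style, but post the JSON
// document to a database instead of writing it to an event stream.
const toHystrixJSON = (stats) => JSON.stringify({
  type: 'HystrixCommand',
  name: stats.name,
  requestCount: stats.ok + stats.errors,
  errorCount: stats.errors
});

const sink = [];
const postToDB = (doc) => sink.push(JSON.parse(doc));

postToDB(toHystrixJSON({ name: 'my-app', ok: 9, errors: 1 }));
console.log(sink[0].requestCount);
```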

Currently I am not sure what the best way is to accurately persist the
performance metrics collected by abacus-perf, and I would appreciate any
input/suggestions on how to do it. Thanks!
