Re: Generic data points for dropsonde
Johannes Tuchscherer
The current way of sending metrics as either Values or Counters through the
pipeline makes the development of a downstream consumer (=nozzle) pretty easy. If you look at the datadog nozzle[0], it just takes all ValueMetrics and Counters and sends them off to datadog. The nozzle does not have to know anything about these metrics (e.g. their origin, name, or layout).

Adding a new way to send metrics as a nested object would certainly make the downstream implementation more complicated. In that case, the nozzle developer has to know which metrics are included inside the generic point (basically the schema of the metric) and treat each point accordingly. For example, if I were to write a nozzle that emits metrics to Graphite with a StatsD client (as is done here[1]), I need to know whether my int64 value is a Gauge or a Counter. Also, my consumer is under constant risk of breaking when the upstream schema changes. We are already facing this problem with the container metrics, but at least the container metrics are in a defined format that is well documented and not likely to change.

I agree with you, though, that the dropsonde protocol could use a mechanism for easier extension. Having a GenericPoint and/or GenericEvent seems like a good idea that I whole-heartedly support. I would just like to stay away from nested metrics. I think the cost of adding more logic into the downstream consumer (and making it harder to maintain) is not worth the benefit of a more concise metric transport.

[0] https://github.com/cloudfoundry-incubator/datadog-firehose-nozzle
[1] https://github.com/CloudCredo/graphite-nozzle

On Tue, Sep 1, 2015 at 5:52 PM, Benjamin Black <bblack(a)pivotal.io> wrote:
> great questions, dwayne.
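The flat-metric argument above can be sketched in Go. The types below are simplified stand-ins, not the actual dropsonde protobuf envelope definitions; the point is only that a flat envelope already declares its metric kind, so a nozzle can forward it with a type switch and no external schema:

```go
package main

import "fmt"

// Simplified stand-ins for the dropsonde envelope payloads. The real
// definitions are protobuf messages, but the flat shape is the same idea.
type ValueMetric struct {
	Name  string
	Value float64
	Unit  string
}

type CounterEvent struct {
	Name  string
	Delta uint64
}

// format shows why a flat-metric nozzle needs no schema knowledge:
// the envelope type itself says whether this is a gauge or a counter,
// so the mapping to a StatsD-style line is mechanical.
func format(metric interface{}) string {
	switch m := metric.(type) {
	case ValueMetric:
		return fmt.Sprintf("gauge %s=%v %s", m.Name, m.Value, m.Unit)
	case CounterEvent:
		return fmt.Sprintf("counter %s+=%d", m.Name, m.Delta)
	default:
		// Unknown envelope types can simply be skipped.
		return "unknown"
	}
}

func main() {
	fmt.Println(format(ValueMetric{"memoryStats.numBytesAllocated", 1024, "bytes"}))
	fmt.Println(format(CounterEvent{"httpRequests", 42}))
}
```

With a nested generic point, the switch above would not be enough: the nozzle would additionally have to unpack each point's fields and consult some out-of-band schema to learn which of them are gauges and which are counters, which is exactly the coupling this message argues against.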