Backends

Markus comes with several backends. You can also write your own.

Logging metrics

class markus.backends.logging.LoggingMetrics(options=None, filters=None)

Metrics backend that publishes to Python logging.

To use, add this to your backends list:

{
    "class": "markus.backends.logging.LoggingMetrics",
    "options": {
        "logger_name": "markus",
        "leader": "METRICS",
    }
}

The markus.backends.logging.LoggingMetrics backend generates logging messages in this format:

leader|timestamp|metric_type|stat|value|#tags

For example:

METRICS|2017-03-06 11:30:00|histogram|foo|4321|#key1:val

This will log at the logging.INFO level.

Options:

  • logger_name: the name for the logger

    Defaults to "markus".

  • leader: string at the start of the metrics line

    This makes it easier to parse logs for metrics data: look for the leader, and everything after it is parseable data.

    Defaults to "METRICS".

__init__(options=None, filters=None)

Implement this.

Parameters:
  • options (dict) – user-specified options

  • filters (list) – filters to apply to this backend

emit(record)

Emit record to backend.

Implement this in your backend.

Parameters:

record (MetricsRecord) – the record to be published

class markus.backends.logging.LoggingRollupMetrics(options=None, filters=None)

Experimental logging backend for rolling up stats over a period.

To use, add this to your backends list:

{
    "class": "markus.backends.logging.LoggingRollupMetrics",
    "options": {
        "logger_name": "markus",
        "leader": "ROLLUP",
        "flush_interval": 10
    }
}

The markus.backends.logging.LoggingRollupMetrics backend generates a rollup every flush_interval seconds of the stats generated during that period.

For incr stats, it shows count and rate.

For gauge stats, it shows count, current value, min value, and max value for the period.

For timing and histogram stats, it shows count, min, average, median, 95%, and max for the period.

This will log at the logging.INFO level.

Options:

  • logger_name: the name for the logger

    Defaults to "markus".

  • leader: string at the start of the metrics line

    This makes it easier to parse logs for metrics data: look for the leader, and everything after it is parseable data.

    Defaults to "ROLLUP".

  • flush_interval: interval to generate rollup data

    markus.backends.logging.LoggingRollupMetrics will spit out rollup data every flush_interval seconds.

    Defaults to 10 seconds.

Note

This backend is experimental, probably has bugs, and may change over time.
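Here's a rough sketch of using the rollup backend; the metric name and values are made up, and exactly when the rollup lines appear depends on the backend's flush behavior:

import logging
import time

import markus

logging.basicConfig(level=logging.INFO)

markus.configure(
    [
        {
            "class": "markus.backends.logging.LoggingRollupMetrics",
            "options": {
                "logger_name": "markus",
                "leader": "ROLLUP",
                "flush_interval": 5,
            },
        }
    ]
)

metrics = markus.get_metrics("myapp")

# Record a bunch of timings during the flush interval.
for ms in range(100):
    metrics.timing("db_query", value=ms)

# After flush_interval seconds have elapsed, further activity causes the
# backend to log rollup lines (count, min, average, median, 95%, max)
# for the timings recorded in the period.
time.sleep(5)
metrics.timing("db_query", value=100)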

__init__(options=None, filters=None)

Implement this.

Parameters:
  • options (dict) – user-specified options

  • filters (list) – filters to apply to this backend

emit(record)

Emit record to backend.

Implement this in your backend.

Parameters:

record (MetricsRecord) – the record to be published

rollup()

Roll up stats and log them.

Statsd metrics

class markus.backends.statsd.StatsdMetrics(options=None, filters=None)

Use the pystatsd client for statsd pings.

This requires the pystatsd module and its requirements to be installed. To install them, run:

$ pip install markus[statsd]

To use, add this to your backends list:

{
    "class": "markus.backends.statsd.StatsdMetrics",
    "options": {
        "statsd_host": "statsd.example.com",
        "statsd_port": 8125,
        "statsd_prefix": None,
        "statsd_maxudpsize": 512,
    }
}

Options:

  • statsd_host: the hostname for the statsd daemon to connect to

    Defaults to "localhost".

  • statsd_port: the port for the statsd daemon to connect to

    Defaults to 8125.

  • statsd_prefix: the prefix to use for statsd data

    Defaults to None.

  • statsd_maxudpsize: the maximum data to send per packet

    Defaults to 512.

Note

The StatsdMetrics backend does not support tags. All tags will be dropped when metrics are emitted by this backend.

Note

statsd doesn’t support histograms so histogram metrics are reported as timing metrics.
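A minimal usage sketch, assuming a statsd daemon is listening on the default host and port; the metric names and tag are illustrative:

import markus

markus.configure(
    [
        {
            "class": "markus.backends.statsd.StatsdMetrics",
            "options": {
                "statsd_host": "localhost",
                "statsd_port": 8125,
            },
        }
    ]
)

metrics = markus.get_metrics("myapp")

# The tag is dropped by this backend, and the histogram is sent to statsd
# as a timing.
metrics.histogram("payload_size", value=331, tags=["handler:upload"])
metrics.incr("requests")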

__init__(options=None, filters=None)

Implement this.

Parameters:
  • options (dict) – user-specified options

  • filters (list) – filters to apply to this backend

emit(record)

Emit record to backend.

Implement this in your backend.

Parameters:

record (MetricsRecord) – the record to be published

Datadog metrics

class markus.backends.datadog.DatadogMetrics(options=None, filters=None)

Use the Datadog DogStatsd client for statsd pings.

This requires the datadog module and its requirements to be installed. To install them, run:

$ pip install markus[datadog]

To use, add this to your backends list:

{
    "class": "markus.backends.datadog.DatadogMetrics",
    "options": {
        "statsd_host": "localhost",
        "statsd_port": 8125,
        "statsd_namespace": "",
    }
}

Options:

  • statsd_host: the hostname for the statsd daemon to connect to

    Defaults to "localhost".

  • statsd_port: the port for the statsd daemon to connect to

    Defaults to 8125.

  • statsd_namespace: the namespace to use for statsd data

    Defaults to "".

  • origin_detection_enabled: whether the client should fill in the container field (part of the Datadog protocol v1.2).

    Defaults to False.
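A minimal usage sketch, assuming a DogStatsd agent is listening on the default host and port; the metric name and tags are illustrative:

import markus

markus.configure(
    [
        {
            "class": "markus.backends.datadog.DatadogMetrics",
            "options": {
                "statsd_host": "localhost",
                "statsd_port": 8125,
                "statsd_namespace": "",
            },
        }
    ]
)

metrics = markus.get_metrics("myapp")

# DogStatsd supports tags, so they're passed along to Datadog.
metrics.incr("requests", tags=["status:200", "endpoint:home"])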

__init__(options=None, filters=None)

Implement this.

Parameters:
  • options (dict) – user-specified options

  • filters (list) – filters to apply to this backend

emit(record)

Emit record to backend.

Implement this in your backend.

Parameters:

record (MetricsRecord) – the record to be published

Cloudwatch metrics

class markus.backends.cloudwatch.CloudwatchMetrics(options=None, filters=None)

Publish metrics to stdout for Cloudwatch.

This prints to stdout in this format:

MONITORING|unix_epoch_timestamp|value|metric_type|my.metric.name|#tag1:value,tag2

This lets you generate metrics that can be read and consumed from Cloudwatch.

For example, Datadog can consume metrics formatted this way from Cloudwatch allowing you to generate metrics in AWS Lambda functions and have them show up in Datadog.

To use, add this to your backends list:

{
    "class": "markus.backends.cloudwatch.CloudwatchMetrics",
}

This backend doesn't take any options.

Note

Datadog's Cloudwatch-through-Lambda log ingestion supports four metric types: count, gauge, histogram, and check. Thus all timing metrics are treated as histogram metrics.
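
A minimal usage sketch; the metric name, value, and tag are illustrative, and the printed line in the comment is approximate:

import markus

markus.configure(
    [
        {"class": "markus.backends.cloudwatch.CloudwatchMetrics"},
    ]
)

metrics = markus.get_metrics("myapp")
metrics.gauge("queue_depth", value=12, tags=["queue:default"])
# Prints something like this to stdout:
# MONITORING|<unix_epoch_timestamp>|12|gauge|myapp.queue_depth|#queue:default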

emit(record)

Emit record to backend.

Implement this in your backend.

Parameters:

record (MetricsRecord) – the record to be published

Writing your own

  1. Subclass markus.backends.BackendBase.

  2. Implement __init__. It takes an options dict with the user's configuration and a filters list of filters to apply to the backend.

  3. Implement emit and have it do whatever is appropriate in the context of your backend.

class markus.backends.BackendBase(options=None, filters=None)

Markus Backend superclass that defines API backends should follow.

__init__(options=None, filters=None)

Implement this.

Parameters:
  • options (dict) – user-specified options

  • filters (list) – filters to apply to this backend

emit(record)

Emit record to backend.

Implement this in your backend.

Parameters:

record (MetricsRecord) – the record to be published

The records that get emitted are markus.main.MetricsRecord instances.

Here’s an example backend that prints metrics to stdout:

>>> import markus
>>> from markus.backends import BackendBase
>>> from markus.main import MetricsRecord

>>> class StdoutMetrics(BackendBase):
...     def __init__(self, options=None, filters=None):
...         options = options or {}
...         self.filters = filters or []
...         self.prefix = options.get('prefix', '')
...
...     def emit(self, record):
...         print('%s %s %s %s tags=%s' % (
...             record.stat_type,
...             self.prefix,
...             record.key,
...             record.value,
...             record.tags
...         ))
...
>>> markus.configure([{'class': StdoutMetrics, 'options': {'prefix': 'foo'}}], raise_errors=True)

>>> metrics = markus.get_metrics('test')
>>> metrics.incr('key1')
incr foo test.key1 1 tags=[]