Configure the OpenTelemetry Collector

Like a regular OpenTelemetry Collector, the Hardware Sentry OpenTelemetry Collector lets you configure several properties: its receivers, processors, exporters, and the pipeline that ties them together.

(Diagram: Internal architecture of the Hardware Sentry OpenTelemetry Collector)

This version of Hardware Sentry OpenTelemetry Collector leverages version 0.36.0 of the OpenTelemetry Collector.

By default, Hardware Sentry OpenTelemetry Collector's configuration file is config/otel-config.yaml. You can start the OpenTelemetry Collector with the path to an alternate file (see Installation).
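As a minimal sketch, assuming a Linux installation where the collector executable is launched directly from its installation directory (the executable name and path below are placeholders; see Installation for the exact command on your platform), the collector is started against an alternate file with the standard --config flag:

  # Start the collector with an alternate configuration file
  # (executable name and path are assumptions; adapt them to your installation)
  ./otelcol --config config/my-otel-config.yaml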

Receivers

Hardware Sentry Exporter for Prometheus

The primary source of data is the prometheus_exec receiver, which is configured to execute Hardware Sentry's internal Exporter for Prometheus and scrape the collected metrics. By default, it runs the internal Exporter for Prometheus on port TCP/24375 and scrapes the metrics every 2 minutes.

  prometheus_exec/hws-exporter:
    exec: "\"bin/hws-exporter\" --target.config.file=\"config/hws-config.yaml\" --server.port={{port} }"
    port: 24375
    scrape_interval: 2m

There is no need to edit this section, unless you need to configure the internal Hardware Sentry Exporter for Prometheus to use a different configuration file than the default one.

You can declare multiple instances of prometheus_exec to run separate instances of the Hardware Sentry Exporter for Prometheus, each on a different port. You will need to specify alternate configuration files and ports, as in the example below:

  prometheus_exec/hws-exporter-1:
    exec: "\"bin/hws-exporter\" --target.config.file=\"config/hws-config-1.yaml\" --server.port={{port} }"
    port: 9011
    scrape_interval: 2m
  prometheus_exec/hws-exporter-2:
    exec: "\"bin/hws-exporter\" --target.config.file=\"config/hws-config-2.yaml\" --server.port={{port} }"
    port: 9012
    scrape_interval: 2m

# [...]

service:
  extensions: [health_check]
  pipelines:
    metrics:
      receivers: [prometheus_exec/hws-exporter-1,prometheus_exec/hws-exporter-2]

OpenTelemetry Collector Internal Exporter for Prometheus

OpenTelemetry Collector's own internal Exporter for Prometheus, which runs on port TCP/8888 (this is not configurable), is an optional source of data. This exporter provides internal metrics about the collector activity (see Health Check). It's referred to as prometheus/internal in the pipeline and leverages the standard prometheus receiver.

  prometheus/internal:
    config:
      scrape_configs:
        - job_name: otel-collector-internal
          scrape_interval: 60s
          static_configs:
            - targets: ["0.0.0.0:8888"]

Processors

By default, the collected metrics go through 3 processors:

  • memory_limiter to limit the memory consumed by the OpenTelemetry Collector process (configurable, see the sketch after this list)
  • batch to group data into batches flushed every 10 seconds (configurable)
  • metricstransform to enrich the collected metrics
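As a minimal sketch, here is how the memory_limiter and batch processors could be tuned (the values shown are illustrative, not the defaults shipped with the product):

processors:
  memory_limiter:
    # Check memory usage every second and cap the process at roughly 1 GiB
    check_interval: 1s
    limit_mib: 1024
    spike_limit_mib: 256
  batch:
    # Flush accumulated metrics every 10 seconds
    timeout: 10s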

This metricstransform processor is particularly useful when the receiving platform requires specific labels on the metrics that are not set by default by the Hardware Sentry Exporter for Prometheus. The metricstransform processor provides many operations to add, rename, or delete labels and metrics, as in the sketch below.
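For example, a minimal sketch that adds a label to every collected metric (the label name and value are illustrative; adapt them to what your receiving platform expects):

processors:
  metricstransform:
    transforms:
      # Match every metric name
      - include: ".*"
        match_type: regexp
        action: update
        operations:
          # Add a label to all matching metrics
          - action: add_label
            new_label: host.group
            new_value: datacenter-1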

Note that Hardware Sentry Exporter for Prometheus can also be configured to add additional labels to the collected metrics.
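As a hypothetical sketch only, assuming the exporter's configuration file accepts an extraLabels block (verify the exact key name and syntax in the Hardware Sentry Exporter for Prometheus documentation), this could look like:

# config/hws-config.yaml -- hypothetical sketch; the extraLabels key is an
# assumption to be verified against the exporter's documentation
extraLabels:
  site: datacenter-1
  environment: production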

Exporters

The exporters section defines the destination of the collected metrics. Hardware Sentry OpenTelemetry Collector version 1.0 includes support for several exporters, including the prometheusremotewrite exporter used in the pipeline example below.

You can configure several exporters in the same instance of the OpenTelemetry Collector so the collected metrics are sent to multiple platforms.

Refer to the documentation of each exporter to learn how to configure it. Specific integration scenarios are also described separately.
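As a minimal sketch, a prometheusremotewrite exporter pointing to a remote-write endpoint could look like the following (the endpoint URL is a placeholder to adapt to your environment):

exporters:
  prometheusremotewrite/your-server:
    # Placeholder endpoint; replace with your platform's remote-write URL
    endpoint: https://your-server:9090/api/v1/write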

The Pipeline

Configured receivers, processors and exporters are taken into account if and only if they are declared in the pipeline:

service:
  extensions: [health_check]
  pipelines:
    metrics:
      receivers: [prometheus_exec/hws-exporter]
      processors: [memory_limiter,batch,metricstransform]
      exporters: [prometheusremotewrite/your-server]