The Tempo Operator uses a custom resource definition (CRD) file that defines the architecture and configuration settings to be used when creating and deploying the distributed tracing platform (Tempo) resources. You can install the default configuration or modify the file.

Customizing your deployment

For information about configuring the back-end storage, see Understanding persistent storage and the appropriate configuration topic for your chosen storage option.

Default configuration options

The Tempo custom resource (CR) defines the architecture and settings to be used when creating the distributed tracing platform (Tempo) resources. You can modify these parameters to customize your distributed tracing platform (Tempo) implementation to your business needs.

Example of a generic Tempo YAML file
apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack
metadata:
  name: name
spec:
  storage: {}
  resources: {}
  storageSize: 200M
  replicationFactor: 1
  retention: {}
  template:
    distributor: {}
    ingester: {}
    compactor: {}
    querier: {}
    queryFrontend: {}
    gateway: {}
Table 1. Tempo parameters

apiVersion:
    Description: API version to use when creating the object.
    Values: tempo.grafana.com/v1alpha1
    Default value: tempo.grafana.com/v1alpha1

kind:
    Description: Defines the kind of Kubernetes object to create.
    Values: TempoStack

metadata:
    Description: Data that uniquely identifies the object, including a name string, UID, and optional namespace.
    Default value: OpenShift Container Platform automatically generates the UID and completes the namespace with the name of the project where the object is created.

name:
    Description: Name for the object.
    Values: Name of your TempoStack instance.
    Default value: tempo-all-in-one-inmemory

spec:
    Description: Specification for the object to be created.
    Values: Contains all of the configuration parameters for your TempoStack instance. When a common definition for all Tempo components is required, it is defined under the spec node. When the definition relates to an individual component, it is placed under the spec/template/<component> node.
    Default value: N/A

resources:
    Description: Resources assigned to the TempoStack instance.

storageSize:
    Description: Storage size for ingester PVCs.

replicationFactor:
    Description: Configuration for the replication factor.

retention:
    Description: Configuration options for retention of traces.

storage:
    Description: Configuration options that define the storage. All storage-related options must be placed under storage and not under the allInOne or other component options.

template.distributor:
    Description: Configuration options for the Tempo distributor.

template.ingester:
    Description: Configuration options for the Tempo ingester.

template.compactor:
    Description: Configuration options for the Tempo compactor.

template.querier:
    Description: Configuration options for the Tempo querier.

template.queryFrontend:
    Description: Configuration options for the Tempo query frontend.

template.gateway:
    Description: Configuration options for the Tempo gateway.
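
For illustration, the following sketch shows how some of these parameters, such as storageSize, replicationFactor, and retention, might be combined in a TempoStack CR. The values and the retention.global.traces structure are assumptions for illustration only, not defaults:

apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack
metadata:
  name: simplest
spec:
  storage:
    secret:
      name: minio
      type: s3
  storageSize: 1Gi       # storage size for ingester PVCs (illustrative value)
  replicationFactor: 2   # replication factor for the ingesters (illustrative value)
  retention:             # trace retention; the global.traces field is an assumption
    global:
      traces: 48h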

Minimum required configuration

The following is the required minimum for creating a distributed tracing platform (Tempo) deployment with the default settings:

apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack
metadata:
  name: simplest
spec:
  storage: (1)
    secret:
      name: minio
      type: s3
  resources:
    total:
      limits:
        memory: 2Gi
        cpu: 2000m
  template:
    queryFrontend:
      jaegerQuery:
        enabled: true
        ingress:
          type: route
1 This section specifies the deployed object storage back end, which requires a created secret with credentials for access to the object storage.

Storage configuration

You can configure object storage for the distributed tracing platform (Tempo) in the TempoStack custom resource under spec.storage. You can choose from several supported storage providers.

Table 2. General storage parameters used by the Tempo Operator to define distributed tracing storage

spec.storage.secret.type:
    Description: Type of storage to use for the deployment.
    Values: memory. Memory storage is only appropriate for development, testing, demonstrations, and proof-of-concept environments because the data does not persist when the pod is shut down.
    Default value: memory

storage.secret.name:
    Description: Name of the secret that contains the credentials for the set object storage type.
    Default value: N/A

storage.tls.caName:
    Description: Name of a ConfigMap object that contains a CA certificate.
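
For illustration, a sketch of how these storage parameters fit together in a TempoStack CR follows. The secret and ConfigMap names are placeholders and must reference objects that you have created:

apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack
metadata:
  name: simplest
spec:
  storage:
    secret:
      name: tempostack-dev-odf # placeholder: a secret with the parameters listed in Table 3
      type: s3
    tls:
      caName: object-storage-ca # placeholder: ConfigMap that contains the CA certificate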

Table 3. Required secret parameters
Storage provider Secret parameters

Red Hat OpenShift Data Foundation

name: tempostack-dev-odf # example

bucket: <bucket_name> # requires an ObjectBucketClaim

endpoint: https://s3.openshift-storage.svc

access_key_id: <data_foundation_access_key_id>

access_key_secret: <data_foundation_access_key_secret>

MinIO

See MinIO Operator.

name: tempostack-dev-minio # example

bucket: <minio_bucket_name> # MinIO documentation

endpoint: <minio_bucket_endpoint>

access_key_id: <minio_access_key_id>

access_key_secret: <minio_access_key_secret>

Amazon S3

name: tempostack-dev-s3 # example

bucket: <s3_bucket_name> # Amazon S3 documentation

endpoint: <s3_bucket_endpoint>

access_key_id: <s3_access_key_id>

access_key_secret: <s3_access_key_secret>

Microsoft Azure Blob Storage

name: tempostack-dev-azure # example

container: <azure_blob_storage_container_name> # Microsoft Azure documentation

account_name: <azure_blob_storage_account_name>

account_key: <azure_blob_storage_account_key>

Google Cloud Storage on Google Cloud Platform (GCP)

name: tempostack-dev-gcs # example

bucketname: <google_cloud_storage_bucket_name> # requires a bucket created in a GCP project

key.json: <path/to/key.json> # requires a service account in the bucket’s GCP project for GCP authentication
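
For example, a minimal sketch of a secret for the Amazon S3 case, built from the parameters above; all names and values are placeholders:

apiVersion: v1
kind: Secret
metadata:
  name: tempostack-dev-s3
type: Opaque
stringData:
  bucket: <s3_bucket_name>
  endpoint: <s3_bucket_endpoint>
  access_key_id: <s3_access_key_id>
  access_key_secret: <s3_access_key_secret>

The TempoStack CR then references this secret by setting spec.storage.secret.name to tempostack-dev-s3 and spec.storage.secret.type to s3.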

Query configuration options

Two components of the distributed tracing platform (Tempo), the querier and query frontend, manage queries. You can configure both of these components.

The querier component finds the requested trace ID in the ingesters or back-end storage. Depending on the configured parameters, the querier component can query both the ingesters and pull bloom filters or indexes from the back end to search blocks in object storage. The querier component exposes an HTTP endpoint at GET /querier/api/traces/<trace_id>, but it is not expected to be used directly. Queries must be sent to the query frontend.

Table 4. Configuration parameters for the querier component
Parameter Description Values

nodeSelector

The simple form of the node-selection constraint.

type: object

replicas

The number of replicas to be created for the component.

type: integer; format: int32

tolerations

Component-specific pod tolerations.

type: array
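
For example, a sketch of these querier parameters in a TempoStack CR; the node selector, toleration, and replica values are illustrative only:

apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack
metadata:
  name: simplest
spec:
  template:
    querier:
      replicas: 2                # run two querier replicas
      nodeSelector:
        kubernetes.io/os: linux  # schedule only on Linux nodes
      tolerations:
        - key: "tracing"         # hypothetical taint key
          operator: "Exists"
          effect: "NoSchedule"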

The query frontend component is responsible for sharding the search space for an incoming query. The query frontend exposes traces via a simple HTTP endpoint: GET /api/traces/<trace_id>. Internally, the query frontend component splits the blockID space into a configurable number of shards and then queues these requests. The querier component connects to the query frontend component via a streaming gRPC connection to process these sharded queries.

Table 5. Configuration parameters for the query frontend component
Parameter Description Values

component

Configuration of the query frontend component.

type: object

component.nodeSelector

The simple form of the node-selection constraint.

type: object

component.replicas

The number of replicas to be created for the query frontend component.

type: integer; format: int32

component.tolerations

Pod tolerations specific to the query frontend component.

type: array

jaegerQuery

The options specific to the Jaeger Query component.

type: object

jaegerQuery.enabled

When enabled, creates the Jaeger Query component, jaegerQuery.

type: boolean

jaegerQuery.ingress

The options for the Jaeger Query ingress.

type: object

jaegerQuery.ingress.annotations

The annotations of the ingress object.

type: object

jaegerQuery.ingress.host

The hostname of the ingress object.

type: string

jaegerQuery.ingress.ingressClassName

The name of an IngressClass cluster resource. Defines which ingress controller serves this ingress resource.

type: string

jaegerQuery.ingress.route

The options for the OpenShift route.

type: object

jaegerQuery.ingress.route.termination

The termination type. The default is edge.

type: string (enum: insecure, edge, passthrough, reencrypt)

jaegerQuery.ingress.type

The type of ingress for the Jaeger Query UI. The supported types are ingress, route, and none.

type: string (enum: ingress, route)

jaegerQuery.monitorTab

The monitor tab configuration.

type: object

jaegerQuery.monitorTab.enabled

Enables the Monitor tab in the Jaeger console. The prometheusEndpoint must be configured.

type: boolean

jaegerQuery.monitorTab.prometheusEndpoint

The endpoint to the Prometheus instance that contains the span rate, error, and duration (RED) metrics. For example, https://thanos-querier.openshift-monitoring.svc.cluster.local:9091.

type: string

Example configuration of the query frontend component in a TempoStack CR
apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack
metadata:
  name: simplest
spec:
  storage:
    secret:
      name: minio
      type: s3
  storageSize: 200M
  resources:
    total:
      limits:
        memory: 2Gi
        cpu: 2000m
  template:
    queryFrontend:
      jaegerQuery:
        enabled: true
        ingress:
          route:
            termination: edge
          type: route

Configuration of the monitor tab in Jaeger UI

Trace data contains rich information, and the data is normalized across instrumented languages and frameworks. Therefore, request rate, error, and duration (RED) metrics can be extracted from traces. The metrics can be visualized in the Monitor tab of the Jaeger console.

The metrics are derived from spans in the OpenTelemetry Collector and scraped from the Collector by the Prometheus instance that is deployed in the user-workload monitoring stack. The Jaeger UI queries these metrics from the Prometheus endpoint and visualizes them.

OpenTelemetry Collector configuration

The OpenTelemetry Collector requires configuration of the spanmetrics connector that derives metrics from traces and exports the metrics in the Prometheus format.

OpenTelemetry Collector custom resource for span RED
kind: OpenTelemetryCollector
apiVersion: opentelemetry.io/v1alpha1
metadata:
  name: otel
spec:
  mode: deployment
  observability:
    metrics:
      enableMetrics: true (1)
  config: |
    connectors:
      spanmetrics: (2)
        metrics_flush_interval: 15s

    receivers:
      otlp: (3)
        protocols:
          grpc:
          http:

    exporters:
      prometheus: (4)
        endpoint: 0.0.0.0:8889
        add_metric_suffixes: false
        resource_to_telemetry_conversion:
          enabled: true # by default resource attributes are dropped

      otlp:
        endpoint: "tempo-simplest-distributor:4317"
        tls:
          insecure: true

    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [otlp, spanmetrics] (5)
        metrics:
          receivers: [spanmetrics] (6)
          exporters: [prometheus]
1 Creates the ServiceMonitor custom resource to enable scraping of the Prometheus exporter.
2 The Spanmetrics connector receives traces and exports metrics.
3 The OTLP receiver to receive spans in the OpenTelemetry protocol.
4 The Prometheus exporter is used to export metrics in the Prometheus format.
5 The Spanmetrics connector is configured as an exporter in the traces pipeline.
6 The Spanmetrics connector is configured as a receiver in the metrics pipeline.

Tempo configuration

In the TempoStack custom resource, the Monitor tab must be enabled, and the Prometheus endpoint must be set to the Thanos Querier service so that the data can be queried from the user-defined monitoring stack.

TempoStack custom resource with the enabled Monitor tab
kind: TempoStack
apiVersion: tempo.grafana.com/v1alpha1
metadata:
  name: simplest
spec:
  template:
    queryFrontend:
      jaegerQuery:
        enabled: true
        monitorTab:
          enabled: true (1)
          prometheusEndpoint: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091 (2)
        ingress:
          type: route
1 Enables the Monitor tab in the Jaeger console.
2 The service name for Thanos Querier from user-workload monitoring.

Span RED metrics and alerting rules

You can use the metrics generated by the spanmetrics connector in alerting rules, for example, to alert about a slow service or to define service level objectives (SLOs). For this purpose, the connector creates the duration_bucket histogram and the calls counter metric. These metrics have labels that identify the service, API name, operation type, and other attributes.

Table 6. Labels of the metrics created in the spanmetrics connector
Label Description Values
service_name

Service name set by the OTEL_SERVICE_NAME environment variable.

frontend

span_name

Name of the operation.

  • /

  • /customer

span_kind

Identifies the server, client, messaging, or internal operation.

  • SPAN_KIND_SERVER

  • SPAN_KIND_CLIENT

  • SPAN_KIND_PRODUCER

  • SPAN_KIND_CONSUMER

  • SPAN_KIND_INTERNAL
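
For example, building on these labels and the calls counter, the following is a sketch of a PrometheusRule recording rule that derives the per-operation request rate. The rule name is hypothetical, and the metric name calls assumes add_metric_suffixes: false, as in the Collector configuration above:

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: span-red-request-rate # example name
spec:
  groups:
  - name: server-side-request-rate
    rules:
    - record: service_name:span_request_rate:rate5m # hypothetical recording rule name
      expr: sum(rate(calls{span_kind="SPAN_KIND_SERVER"}[5m])) by (service_name, span_name)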

Example PrometheusRule CR that defines an SLO alerting rule for when the front-end service is not serving 95% of requests within 2000 ms
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: span-red
spec:
  groups:
  - name: server-side-latency
    rules:
    - alert: SpanREDFrontendAPIRequestLatency
      expr: histogram_quantile(0.95, sum(rate(duration_bucket{service_name="frontend", span_kind="SPAN_KIND_SERVER"}[5m])) by (le, service_name, span_name)) > 2000 (1)
      labels:
        severity: Warning
      annotations:
        summary: "High request latency on {{$labels.service_name}} and {{$labels.span_name}}"
        description: "{{$labels.instance}} has 95th request latency above 2s (current value: {{$value}}s)"
1 The expression for checking if 95% of the front-end server response time values are below 2000 ms. The time range ([5m]) must be at least four times the scrape interval and long enough to accommodate a change in the metric.

Multitenancy

Multitenancy with authentication and authorization is provided in the Tempo Gateway service. The authentication uses OpenShift OAuth and the Kubernetes TokenReview API. The authorization uses the Kubernetes SubjectAccessReview API.

Sample Tempo CR with two tenants, dev and prod
apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack
metadata:
  name: simplest
spec:
  tenants:
    mode: openshift (1)
    authentication: (2)
      - tenantName: dev (3)
        tenantId: "1610b0c3-c509-4592-a256-a1871353dbfa" (4)
      - tenantName: prod
        tenantId: "1610b0c3-c509-4592-a256-a1871353dbfb"
  template:
    gateway:
      enabled: true (5)
    queryFrontend:
      jaegerQuery:
        enabled: true
1 Must be set to openshift.
2 The list of tenants.
3 The tenant name. Must be provided in the X-Scope-OrgID header when ingesting the data.
4 A unique tenant ID.
5 Enables a gateway that performs authentication and authorization. The Jaeger UI is exposed at http://<gateway-ingress>/api/traces/v1/<tenant-name>/search.

The authorization configuration uses the ClusterRole and ClusterRoleBinding of the Kubernetes Role-Based Access Control (RBAC). By default, no users have read or write permissions.

Sample of the read RBAC configuration that allows authenticated users to read the trace data of the dev and prod tenants
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: tempostack-traces-reader
rules:
  - apiGroups:
      - 'tempo.grafana.com'
    resources: (1)
      - dev
      - prod
    resourceNames:
      - traces
    verbs:
      - 'get' (2)
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tempostack-traces-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: tempostack-traces-reader
subjects:
  - kind: Group
    apiGroup: rbac.authorization.k8s.io
    name: system:authenticated (3)
1 Lists the tenants.
2 The get value enables the read operation.
3 Grants all authenticated users the read permissions for trace data.
Sample of the write RBAC configuration that allows the otel-collector service account to write the trace data for the dev tenant
apiVersion: v1
kind: ServiceAccount
metadata:
  name: otel-collector (1)
  namespace: otel
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: tempostack-traces-write
rules:
  - apiGroups:
      - 'tempo.grafana.com'
    resources: (2)
      - dev
    resourceNames:
      - traces
    verbs:
      - 'create' (3)
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tempostack-traces
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: tempostack-traces-write
subjects:
  - kind: ServiceAccount
    name: otel-collector
    namespace: otel
1 The service account name for the client to use when exporting trace data. The client must send the service account token, /var/run/secrets/kubernetes.io/serviceaccount/token, as the bearer token header.
2 Lists the tenants.
3 The create value enables the write operation.

Trace data can be sent to the Tempo instance from the OpenTelemetry Collector that uses the service account with RBAC for writing the data.

Sample OpenTelemetry CR configuration
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: cluster-collector
  namespace: tracing-system
spec:
  mode: deployment
  serviceAccount: otel-collector
  config: |
      extensions:
        bearertokenauth:
          filename: "/var/run/secrets/kubernetes.io/serviceaccount/token"
      exporters:
        otlp/dev:
          endpoint: tempo-simplest-gateway.tempo.svc.cluster.local:8090
          tls:
            insecure: false
            ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt"
          auth:
            authenticator: bearertokenauth
          headers:
            X-Scope-OrgID: "dev"
      service:
        extensions: [bearertokenauth]
        pipelines:
          traces:
            exporters: [otlp/dev]

Configuring monitoring and alerts

The Tempo Operator supports monitoring and alerts for each TempoStack component, such as the distributor and ingester, and exposes upgrade and operational metrics for the Operator itself.

Configuring the TempoStack metrics and alerts

You can enable metrics and alerts of TempoStack instances.

Prerequisites
  • Monitoring for user-defined projects is enabled in the cluster.

Procedure
  1. To enable metrics of a TempoStack instance, set the spec.observability.metrics.createServiceMonitors field to true:

    apiVersion: tempo.grafana.com/v1alpha1
    kind: TempoStack
    metadata:
      name: <name>
    spec:
      observability:
        metrics:
          createServiceMonitors: true
  2. To enable alerts for a TempoStack instance, set the spec.observability.metrics.createPrometheusRules field to true:

    apiVersion: tempo.grafana.com/v1alpha1
    kind: TempoStack
    metadata:
      name: <name>
    spec:
      observability:
        metrics:
          createPrometheusRules: true
Verification

You can use the Administrator view of the web console to verify successful configuration:

  1. Go to Observe → Targets, filter for Source: User, and check that ServiceMonitors in the format tempo-<instance_name>-<component> have the Up status.

  2. To verify that alerts are set up correctly, go to Observe → Alerting → Alerting rules, filter for Source: User, and check that the Alert rules for the TempoStack instance components are available.

Configuring the Tempo Operator metrics and alerts

When installing the Tempo Operator from the web console, you can select the Enable Operator recommended cluster monitoring on this Namespace checkbox, which enables the creation of metrics and alerts for the Tempo Operator.

If the checkbox was not selected during installation, you can manually enable metrics and alerts even after installing the Tempo Operator.

Procedure
  • Add the openshift.io/cluster-monitoring: "true" label in the project where the Tempo Operator is installed, which is openshift-tempo-operator by default.
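
For example, assuming the default openshift-tempo-operator project, the labeled namespace looks similar to the following sketch:

apiVersion: v1
kind: Namespace
metadata:
  name: openshift-tempo-operator
  labels:
    openshift.io/cluster-monitoring: "true"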

Verification

You can use the Administrator view of the web console to verify successful configuration:

  1. Go to Observe → Targets, filter for Source: Platform, and search for tempo-operator, which must have the Up status.

  2. To verify that alerts are set up correctly, go to Observe → Alerting → Alerting rules, filter for Source: Platform, and locate the Alert rules for the Tempo Operator.