The Red Hat OpenShift distributed tracing platform (Jaeger) is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.
The Red Hat OpenShift distributed tracing platform (Jaeger) Operator uses a custom resource definition (CRD) file that defines the architecture and configuration settings to be used when creating and deploying the distributed tracing platform (Jaeger) resources. You can install the default configuration or modify the file.
If you have installed distributed tracing platform as part of Red Hat OpenShift Service Mesh, you can perform basic configuration as part of the ServiceMeshControlPlane, but for complete control, you must configure a Jaeger CR and then reference your distributed tracing configuration file in the ServiceMeshControlPlane.
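For example, a minimal sketch of this reference, assuming the ServiceMeshControlPlane v2 API and a Jaeger CR named MyConfigFile deployed in the control plane namespace:

apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic
  namespace: istio-system
spec:
  tracing:
    type: Jaeger          # use the distributed tracing platform (Jaeger) for tracing
  addons:
    jaeger:
      name: MyConfigFile  # name of the Jaeger CR that holds the full configuration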
The Red Hat OpenShift distributed tracing platform (Jaeger) has predefined deployment strategies. You specify a deployment strategy in the custom resource file. When you create a distributed tracing platform (Jaeger) instance, the Operator uses this configuration file to create the objects necessary for the deployment.
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: MyConfigFile
spec:
  strategy: production (1)
1 Deployment strategy.
The Red Hat OpenShift distributed tracing platform (Jaeger) Operator currently supports the following deployment strategies:
allInOne
This strategy is intended for development, testing, and demo purposes; it is not intended for production use. The main backend components, Agent, Collector, and Query service, are all packaged into a single executable that is configured, by default, to use in-memory storage.
In-memory storage is not persistent, which means that if the distributed tracing platform (Jaeger) instance shuts down, restarts, or is replaced, your trace data will be lost. In-memory storage also cannot be scaled, because each pod has its own memory. For persistent storage, you must use the production or streaming strategy, which uses Elasticsearch as the default storage.
production
The production strategy is intended for production environments, where long-term storage of trace data is important and a more scalable, highly available architecture is required. Each of the backend components is therefore deployed separately. The Agent can be injected as a sidecar on the instrumented application. The Query and Collector services are configured with a supported storage type, currently Elasticsearch. Multiple instances of each of these components can be provisioned as required for performance and resilience purposes.
streaming
The streaming strategy is designed to augment the production strategy by providing a streaming capability that effectively sits between the Collector and the Elasticsearch backend storage. This provides the benefit of reducing the pressure on the backend storage under high-load situations, and enables other trace post-processing capabilities to tap into the real-time span data directly from the streaming platform (AMQ Streams/Kafka).
The custom resource definition (CRD) defines the configuration used when you deploy an instance of Red Hat OpenShift distributed tracing platform. The default CR is named jaeger-all-in-one-inmemory and it is configured with minimal resources to ensure that you can successfully install it on a default OpenShift Container Platform installation. You can use this default configuration to create a Red Hat OpenShift distributed tracing platform (Jaeger) instance that uses the allInOne deployment strategy, or you can define your own custom resource file.
In-memory storage is not persistent. If the Jaeger pod shuts down, restarts, or is replaced, your trace data will be lost. For persistent storage, you must use the production or streaming deployment strategy.
The Red Hat OpenShift distributed tracing platform (Jaeger) Operator has been installed.
You have reviewed the instructions for how to customize the deployment.
You have access to the cluster as a user with the cluster-admin
role.
Log in to the OpenShift Container Platform web console as a user with the cluster-admin
role.
Create a new project, for example tracing-system.
If you are installing as part of Service Mesh, the distributed tracing platform resources must be installed in the same namespace as the ServiceMeshControlPlane resource.
Go to Home → Projects.
Click Create Project.
Enter tracing-system
in the Name field.
Click Create.
Navigate to Operators → Installed Operators.
If necessary, select tracing-system
from the Project menu. You may have to wait a few moments for the Operators to be copied to the new project.
Click the Red Hat OpenShift distributed tracing platform (Jaeger) Operator. On the Details tab, under Provided APIs, the Operator provides a single link.
Under Jaeger, click Create Instance.
On the Create Jaeger page, to install using the defaults, click Create to create the distributed tracing platform (Jaeger) instance.
On the Jaegers page, click the name of the distributed tracing platform (Jaeger) instance, for example, jaeger-all-in-one-inmemory.
On the Jaeger Details page, click the Resources tab. Wait until the pod has a status of "Running" before continuing.
Follow this procedure to create an instance of distributed tracing platform (Jaeger) from the command line.
The Red Hat OpenShift distributed tracing platform (Jaeger) Operator has been installed and verified.
You have reviewed the instructions for how to customize the deployment.
You have access to the OpenShift CLI (oc
) that matches your OpenShift Container Platform version.
You have access to the cluster as a user with the cluster-admin
role.
Log in to the OpenShift Container Platform CLI as a user with the cluster-admin
role by running the following command:
$ oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443
Create a new project named tracing-system
by running the following command:
$ oc new-project tracing-system
Create a custom resource file named jaeger.yaml
that contains the following text:
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: jaeger-all-in-one-inmemory
Run the following command to deploy distributed tracing platform (Jaeger):
$ oc create -n tracing-system -f jaeger.yaml
Run the following command to watch the progress of the pods during the installation process:
$ oc get pods -n tracing-system -w
After the installation process has completed, the output is similar to the following example:
NAME READY STATUS RESTARTS AGE
jaeger-all-in-one-inmemory-cdff7897b-qhfdx 2/2 Running 0 24s
The production
deployment strategy is intended for production environments that require a more scalable and highly available architecture, and where long-term storage of trace data is important.
The OpenShift Elasticsearch Operator has been installed.
The Red Hat OpenShift distributed tracing platform (Jaeger) Operator has been installed.
You have reviewed the instructions for how to customize the deployment.
You have access to the cluster as a user with the cluster-admin
role.
Log in to the OpenShift Container Platform web console as a user with the cluster-admin
role.
Create a new project, for example tracing-system.
If you are installing as part of Service Mesh, the distributed tracing platform resources must be installed in the same namespace as the ServiceMeshControlPlane resource.
Navigate to Home → Projects.
Click Create Project.
Enter tracing-system
in the Name field.
Click Create.
Navigate to Operators → Installed Operators.
If necessary, select tracing-system
from the Project menu. You may have to wait a few moments for the Operators to be copied to the new project.
Click the Red Hat OpenShift distributed tracing platform (Jaeger) Operator. On the Overview tab, under Provided APIs, the Operator provides a single link.
Under Jaeger, click Create Instance.
On the Create Jaeger page, replace the default all-in-one
YAML text with your production YAML configuration, for example:
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: jaeger-production
  namespace:
spec:
  strategy: production
  ingress:
    security: oauth-proxy
  storage:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3
      redundancyPolicy: SingleRedundancy
    esIndexCleaner:
      enabled: true
      numberOfDays: 7
      schedule: 55 23 * * *
    esRollover:
      schedule: '*/30 * * * *'
Click Create to create the distributed tracing platform (Jaeger) instance.
On the Jaegers page, click the name of the distributed tracing platform (Jaeger) instance, for example, jaeger-prod-elasticsearch.
On the Jaeger Details page, click the Resources tab. Wait until all the pods have a status of "Running" before continuing.
Follow this procedure to create an instance of distributed tracing platform (Jaeger) from the command line.
The OpenShift Elasticsearch Operator has been installed.
The Red Hat OpenShift distributed tracing platform (Jaeger) Operator has been installed.
You have reviewed the instructions for how to customize the deployment.
You have access to the OpenShift CLI (oc
) that matches your OpenShift Container Platform version.
You have access to the cluster as a user with the cluster-admin
role.
Log in to the OpenShift CLI (oc
) as a user with the cluster-admin
role by running the following command:
$ oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443
Create a new project named tracing-system
by running the following command:
$ oc new-project tracing-system
Create a custom resource file named jaeger-production.yaml
that contains the text of the example file in the previous procedure.
Run the following command to deploy distributed tracing platform (Jaeger):
$ oc create -n tracing-system -f jaeger-production.yaml
Run the following command to watch the progress of the pods during the installation process:
$ oc get pods -n tracing-system -w
After the installation process has completed, you will see output similar to the following example:
NAME READY STATUS RESTARTS AGE
elasticsearch-cdm-jaegersystemjaegerproduction-1-6676cf568gwhlw 2/2 Running 0 10m
elasticsearch-cdm-jaegersystemjaegerproduction-2-bcd4c8bf5l6g6w 2/2 Running 0 10m
elasticsearch-cdm-jaegersystemjaegerproduction-3-844d6d9694hhst 2/2 Running 0 10m
jaeger-production-collector-94cd847d-jwjlj 1/1 Running 3 8m32s
jaeger-production-query-5cbfbd499d-tv8zf 3/3 Running 3 8m32s
The streaming
deployment strategy is intended for production environments that require a more scalable and highly available architecture, and where long-term storage of trace data is important.
The streaming
strategy provides a streaming capability that sits between the Collector and the Elasticsearch storage. This reduces the pressure on the storage under high load situations, and enables other trace post-processing capabilities to tap into the real-time span data directly from the Kafka streaming platform.
The streaming strategy requires an additional Red Hat subscription for AMQ Streams. If you do not have an AMQ Streams subscription, contact your sales representative for more information.
The streaming deployment strategy is currently unsupported on IBM Z.
The AMQ Streams Operator has been installed. If using version 1.4.0 or higher, you can use self-provisioning; otherwise, you must create the Kafka instance.
The Red Hat OpenShift distributed tracing platform (Jaeger) Operator has been installed.
You have reviewed the instructions for how to customize the deployment.
You have access to the cluster as a user with the cluster-admin
role.
Log in to the OpenShift Container Platform web console as a user with the cluster-admin
role.
Create a new project, for example tracing-system.
If you are installing as part of Service Mesh, the distributed tracing platform resources must be installed in the same namespace as the ServiceMeshControlPlane resource.
Navigate to Home → Projects.
Click Create Project.
Enter tracing-system
in the Name field.
Click Create.
Navigate to Operators → Installed Operators.
If necessary, select tracing-system
from the Project menu. You may have to wait a few moments for the Operators to be copied to the new project.
Click the Red Hat OpenShift distributed tracing platform (Jaeger) Operator. On the Overview tab, under Provided APIs, the Operator provides a single link.
Under Jaeger, click Create Instance.
On the Create Jaeger page, replace the default all-in-one
YAML text with your streaming YAML configuration, for example:
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: jaeger-streaming
spec:
  strategy: streaming
  collector:
    options:
      kafka:
        producer:
          topic: jaeger-spans
          brokers: my-cluster-kafka-brokers.kafka:9092 (1)
  storage:
    type: elasticsearch
  ingester:
    options:
      kafka:
        consumer:
          topic: jaeger-spans
          brokers: my-cluster-kafka-brokers.kafka:9092
1 If the brokers are not defined, AMQ Streams 1.4.0+ self-provisions Kafka.
Click Create to create the distributed tracing platform (Jaeger) instance.
On the Jaegers page, click the name of the distributed tracing platform (Jaeger) instance, for example, jaeger-streaming.
On the Jaeger Details page, click the Resources tab. Wait until all the pods have a status of "Running" before continuing.
Follow this procedure to create an instance of distributed tracing platform (Jaeger) from the command line.
The AMQ Streams Operator has been installed. If using version 1.4.0 or higher, you can use self-provisioning; otherwise, you must create the Kafka instance.
The Red Hat OpenShift distributed tracing platform (Jaeger) Operator has been installed.
You have reviewed the instructions for how to customize the deployment.
You have access to the OpenShift CLI (oc
) that matches your OpenShift Container Platform version.
You have access to the cluster as a user with the cluster-admin
role.
Procedure
Log in to the OpenShift CLI (oc
) as a user with the cluster-admin
role by running the following command:
$ oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443
Create a new project named tracing-system
by running the following command:
$ oc new-project tracing-system
Create a custom resource file named jaeger-streaming.yaml
that contains the text of the example file in the previous procedure.
Run the following command to deploy Jaeger:
$ oc create -n tracing-system -f jaeger-streaming.yaml
Run the following command to watch the progress of the pods during the installation process:
$ oc get pods -n tracing-system -w
After the installation process has completed, you should see output similar to the following example:
NAME READY STATUS RESTARTS AGE
elasticsearch-cdm-jaegersystemjaegerstreaming-1-697b66d6fcztcnn 2/2 Running 0 5m40s
elasticsearch-cdm-jaegersystemjaegerstreaming-2-5f4b95c78b9gckz 2/2 Running 0 5m37s
elasticsearch-cdm-jaegersystemjaegerstreaming-3-7b6d964576nnz97 2/2 Running 0 5m5s
jaeger-streaming-collector-6f6db7f99f-rtcfm 1/1 Running 0 80s
jaeger-streaming-entity-operator-6b6d67cc99-4lm9q 3/3 Running 2 2m18s
jaeger-streaming-ingester-7d479847f8-5h8kc 1/1 Running 0 80s
jaeger-streaming-kafka-0 2/2 Running 0 3m1s
jaeger-streaming-query-65bf5bb854-ncnc7 3/3 Running 0 80s
jaeger-streaming-zookeeper-0 2/2 Running 0 3m39s
To access the Jaeger console you must have either Red Hat OpenShift Service Mesh or Red Hat OpenShift distributed tracing platform installed, and Red Hat OpenShift distributed tracing platform (Jaeger) installed, configured, and deployed.
The installation process creates a route to access the Jaeger console.
If you know the URL for the Jaeger console, you can access it directly. If you do not know the URL, use the following directions.
Log in to the OpenShift Container Platform web console as a user with cluster-admin rights. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin
role.
Navigate to Networking → Routes.
On the Routes page, select the control plane project, for example tracing-system
, from the Namespace menu.
The Location column displays the linked address for each route.
If necessary, use the filter to find the jaeger
route. Click the route Location to launch the console.
Click Log In With OpenShift.
Log in to the OpenShift Container Platform CLI as a user with the cluster-admin
role by running the following command. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin
role.
$ oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443
To query for details of the route using the command line, enter the following command. In this example, tracing-system
is the control plane namespace.
$ export JAEGER_URL=$(oc get route -n tracing-system jaeger -o jsonpath='{.spec.host}')
Launch a browser and navigate to https://<JAEGER_URL>, where <JAEGER_URL> is the route that you discovered in the previous step.
Log in using the same user name and password that you use to access the OpenShift Container Platform console.
If you have added services to the service mesh and have generated traces, you can use the filters and Find Traces button to search your trace data.
If you are validating the console installation, there is no trace data to display.
Red Hat OpenShift distributed tracing platform instance names must be unique. If you want to have multiple Red Hat OpenShift distributed tracing platform (Jaeger) instances and are using sidecar-injected agents, the instances must have unique names, and the injection annotation should explicitly specify the distributed tracing platform (Jaeger) instance name to which the tracing data should be reported.
If you have a multitenant implementation and tenants are separated by namespaces, deploy a Red Hat OpenShift distributed tracing platform (Jaeger) instance to each tenant namespace.
For information about configuring persistent storage, see Understanding persistent storage and the appropriate configuration topic for your chosen storage option.
The Jaeger custom resource (CR) defines the architecture and settings to be used when creating the distributed tracing platform (Jaeger) resources. You can modify these parameters to customize your distributed tracing platform (Jaeger) implementation to your business needs.
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: name
spec:
  strategy: <deployment_strategy>
  allInOne:
    options: {}
    resources: {}
  agent:
    options: {}
    resources: {}
  collector:
    options: {}
    resources: {}
  sampling:
    options: {}
  storage:
    type:
    options: {}
  query:
    options: {}
    resources: {}
  ingester:
    options: {}
    resources: {}
  options: {}
Parameter | Description | Values | Default value
---|---|---|---
apiVersion: | API version to use when creating the object. | jaegertracing.io/v1 | jaegertracing.io/v1
kind: | Defines the kind of Kubernetes object to create. | jaeger |
metadata: | Data that helps uniquely identify the object, including a name string, UID, and optional namespace. | | OpenShift Container Platform automatically generates the UID and completes the namespace with the name of the project where the object is created.
name: | Name for the object. | The name of your distributed tracing platform (Jaeger) instance. | jaeger-all-in-one-inmemory
spec: | Specification for the object to be created. | Contains all of the configuration parameters for your distributed tracing platform (Jaeger) instance. When a common definition for all Jaeger components is required, it is defined under the spec node. When the definition relates to an individual component, it is placed under the spec/<component> node. | N/A
strategy: | Jaeger deployment strategy. | allInOne, production, or streaming | allInOne
allInOne: | Because the allInOne image deploys the Agent, Collector, Query, Ingester, and Jaeger UI in a single pod, configuration for this deployment must nest component configuration under the allInOne parameter. | |
agent: | Configuration options that define the Agent. | |
collector: | Configuration options that define the Jaeger Collector. | |
sampling: | Configuration options that define the sampling strategies for tracing. | |
storage: | Configuration options that define the storage. All storage-related options must be placed under storage, rather than under the allInOne or other component options. | |
query: | Configuration options that define the Query service. | |
ingester: | Configuration options that define the Ingester service. | |
The following example YAML is the minimum required to create a Red Hat OpenShift distributed tracing platform (Jaeger) deployment using the default settings.
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: jaeger-all-in-one-inmemory
To schedule the Jaeger and Elasticsearch pods on dedicated nodes, see How to deploy the different Jaeger components on infra nodes using nodeSelector and tolerations in OpenShift 4.
The Jaeger Collector is the component responsible for receiving the spans that were captured by the tracer and writing them to persistent Elasticsearch storage when using the production
strategy, or to AMQ Streams when using the streaming
strategy.
The Collectors are stateless and thus many instances of Jaeger Collector can be run in parallel. Collectors require almost no configuration, except for the location of the Elasticsearch cluster.
Parameter | Description | Values
---|---|---
collector: replicas: | Specifies the number of Collector replicas to create. | Integer, for example, 5
Parameter | Description | Values
---|---|---
spec: collector: options: {} | Configuration options that define the Jaeger Collector. |
options: collector: num-workers: | The number of workers pulling from the queue. | Integer, for example, 50
options: collector: queue-size: | The size of the Collector queue. | Integer, for example, 2000
options: kafka: producer: topic: jaeger-spans | The topic parameter identifies the Kafka configuration used by the Collector to produce the messages, and the Ingester to consume the messages. | Label for the producer.
options: kafka: producer: brokers: my-cluster-kafka-brokers.kafka:9092 | Identifies the Kafka configuration used by the Collector to produce the messages. If brokers are not specified, and you have AMQ Streams 1.4.0+ installed, the Red Hat OpenShift distributed tracing platform (Jaeger) Operator will self-provision Kafka. |
options: log-level: | Logging level for the Collector. | Possible values: debug, info, warn, error, fatal, panic
options: otlp: enabled: true grpc: host-port: 4317 max-connection-age: 0s max-connection-age-grace: 0s max-message-size: 4194304 tls: enabled: false cert: /path/to/cert.crt cipher-suites: "TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256" client-ca: /path/to/cert.ca reload-interval: 0s min-version: 1.2 max-version: 1.3 | To accept OTLP/gRPC, explicitly enable the otlp parameter. All the other parameters are optional. |
options: otlp: enabled: true http: cors: allowed-headers: [<header-name>[, <header-name>]*] allowed-origins: * host-port: 4318 max-connection-age: 0s max-connection-age-grace: 0s max-message-size: 4194304 read-timeout: 0s read-header-timeout: 2s idle-timeout: 0s tls: enabled: false cert: /path/to/cert.crt cipher-suites: "TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256" client-ca: /path/to/cert.ca reload-interval: 0s min-version: 1.2 max-version: 1.3 | To accept OTLP/HTTP, explicitly enable the otlp parameter. All the other parameters are optional. |
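As an illustrative sketch (the replica and queue values here are arbitrary examples, not recommendations), these Collector options combine in a Jaeger CR as follows:

apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: jaeger-production
spec:
  strategy: production
  collector:
    replicas: 3            # run three stateless Collector instances in parallel
    options:
      collector:
        num-workers: 50    # workers pulling from the internal queue
        queue-size: 2000   # size of the Collector queue
      otlp:
        enabled: true      # accept OTLP/gRPC on 4317 and OTLP/HTTP on 4318
      log-level: info
  storage:
    type: elasticsearch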
The Red Hat OpenShift distributed tracing platform (Jaeger) Operator can be used to define sampling strategies that will be supplied to tracers that have been configured to use a remote sampler.
While all traces are generated, only a few are sampled. Sampling a trace marks the trace for further processing and storage.
This is not relevant if a trace was started by the Envoy proxy, as the sampling decision is made there. The Jaeger sampling decision is only relevant when the trace is started by an application using the client.
When a service receives a request that contains no trace context, the client starts a new trace, assigns it a random trace ID, and makes a sampling decision based on the currently installed sampling strategy. The sampling decision propagates to all subsequent requests in the trace so that other services are not making the sampling decision again.
The distributed tracing platform (Jaeger) libraries support the following samplers:
Probabilistic - The sampler makes a random sampling decision with the probability of sampling equal to the value of the sampling.param
property. For example, using sampling.param=0.1
samples approximately 1 in 10 traces.
Rate Limiting - The sampler uses a leaky bucket rate limiter to ensure that traces are sampled with a certain constant rate. For example, using sampling.param=2.0
samples requests with the rate of 2 traces per second.
Parameter | Description | Values | Default value
---|---|---|---
spec: sampling: options: {} default_strategy: service_strategy: | Configuration options that define the sampling strategies for tracing. | | If you do not provide configuration, the Collectors will return the default probabilistic sampling policy with 0.001 (0.1%) probability for all services.
default_strategy: type: service_strategy: type: | Sampling strategy to use. See descriptions above. | Valid values are probabilistic and ratelimiting. | probabilistic
default_strategy: param: service_strategy: param: | Parameters for the selected sampling strategy. | Decimal and integer values (0, .1, 1, 10) | 1
This example defines a default sampling strategy that is probabilistic, with a 50% chance of the trace instances being sampled.
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: with-sampling
spec:
  sampling:
    options:
      default_strategy:
        type: probabilistic
        param: 0.5
      service_strategies:
        - service: alpha
          type: probabilistic
          param: 0.8
          operation_strategies:
            - operation: op1
              type: probabilistic
              param: 0.2
            - operation: op2
              type: probabilistic
              param: 0.4
        - service: beta
          type: ratelimiting
          param: 5
If there are no user-supplied configurations, the distributed tracing platform (Jaeger) uses the following settings:
spec:
  sampling:
    options:
      default_strategy:
        type: probabilistic
        param: 1
You configure storage for the Collector, Ingester, and Query services under spec.storage
. Multiple instances of each of these components can be provisioned as required for performance and resilience purposes.
Parameter | Description | Values | Default value
---|---|---|---
spec: storage: type: | Type of storage to use for the deployment. | memory or elasticsearch | memory
storage: secretName: | Name of the secret, for example tracing-secret. | | N/A
storage: options: {} | Configuration options that define the storage. | |
Parameter | Description | Values | Default value
---|---|---|---
storage: esIndexCleaner: enabled: | When using Elasticsearch storage, by default a job is created to clean old traces from the index. This parameter enables or disables the index cleaner job. | true/false | true
storage: esIndexCleaner: numberOfDays: | Number of days to wait before deleting an index. | Integer value | 7
storage: esIndexCleaner: schedule: | Defines the schedule for how often to clean the Elasticsearch index. | Cron expression | "55 23 * * *"
When you deploy a Jaeger custom resource, the Red Hat OpenShift distributed tracing platform (Jaeger) Operator uses the OpenShift Elasticsearch Operator to create an Elasticsearch cluster based on the configuration provided in the storage
section of the custom resource file. The Red Hat OpenShift distributed tracing platform (Jaeger) Operator will provision Elasticsearch if the following configurations are set:
spec.storage.type is set to elasticsearch
spec.storage.elasticsearch.doNotProvision is set to false
spec.storage.options.es.server-urls is not defined, that is, there is no connection to an Elasticsearch instance that was not provisioned by the OpenShift Elasticsearch Operator.
When provisioning Elasticsearch, the Red Hat OpenShift distributed tracing platform (Jaeger) Operator sets the Elasticsearch custom resource name to the value of spec.storage.elasticsearch.name from the Jaeger custom resource. If you do not specify a value for spec.storage.elasticsearch.name, the Operator uses elasticsearch.
You can have only one distributed tracing platform (Jaeger) instance with self-provisioned Elasticsearch per namespace, and there can be only one Elasticsearch cluster per namespace. The Elasticsearch cluster is meant to be dedicated to a single distributed tracing platform (Jaeger) instance.
If you have already installed Elasticsearch as part of OpenShift Logging, the Red Hat OpenShift distributed tracing platform (Jaeger) Operator can use the installed OpenShift Elasticsearch Operator to provision storage.
The following configuration parameters are for a self-provisioned Elasticsearch instance, that is, an instance created by the Red Hat OpenShift distributed tracing platform (Jaeger) Operator using the OpenShift Elasticsearch Operator. You specify configuration options for self-provisioned Elasticsearch under spec:storage:elasticsearch in your configuration file.
Parameter | Description | Values | Default value
---|---|---|---
elasticsearch: properties: doNotProvision: | Use to specify whether or not an Elasticsearch instance should be provisioned by the Red Hat OpenShift distributed tracing platform (Jaeger) Operator. | true/false | false
elasticsearch: properties: name: | Name of the Elasticsearch instance. The Red Hat OpenShift distributed tracing platform (Jaeger) Operator uses the Elasticsearch instance specified in this parameter to connect to Elasticsearch. | string | elasticsearch
elasticsearch: nodeCount: | Number of Elasticsearch nodes. For high availability use at least 3 nodes. Do not use 2 nodes, as the "split brain" problem can happen. | Integer value. For example, Proof of concept = 1, Minimum deployment = 3 | 3
elasticsearch: resources: requests: cpu: | Number of central processing units for requests, based on your environment's configuration. | Specified in cores or millicores, for example, 200m, 0.5, 1. For example, Proof of concept = 500m, Minimum deployment = 1 | 1
elasticsearch: resources: requests: memory: | Available memory for requests, based on your environment's configuration. | Specified in bytes, for example, 200Ki, 50Mi, 5Gi. For example, Proof of concept = 1Gi, Minimum deployment = 16Gi* | 16Gi
elasticsearch: resources: limits: cpu: | Limit on number of central processing units, based on your environment's configuration. | Specified in cores or millicores, for example, 200m, 0.5, 1. For example, Proof of concept = 500m, Minimum deployment = 1 |
elasticsearch: resources: limits: memory: | Available memory limit based on your environment's configuration. | Specified in bytes, for example, 200Ki, 50Mi, 5Gi. For example, Proof of concept = 1Gi, Minimum deployment = 16Gi* |
elasticsearch: redundancyPolicy: | Data replication policy defines how Elasticsearch shards are replicated across data nodes in the cluster. If not specified, the Red Hat OpenShift distributed tracing platform (Jaeger) Operator automatically determines the most appropriate replication based on number of nodes. | ZeroRedundancy (no replica shards), SingleRedundancy (one replica shard), MultipleRedundancy (each index is spread over half of the data nodes), FullRedundancy (each index is fully replicated on every data node in the cluster) |
elasticsearch: useCertManagement: | Use to specify whether or not distributed tracing platform (Jaeger) should use the certificate management feature of the OpenShift Elasticsearch Operator. This feature was added to OpenShift Logging 5.2 in OpenShift Container Platform 4.7 and is the preferred setting for new Jaeger deployments. | true/false | false
Each Elasticsearch node can operate with a lower memory setting though this is NOT recommended for production deployments. For production use, you must have no less than 16 Gi allocated to each pod by default, but preferably allocate as much as you can, up to 64 Gi per pod.
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: simple-prod
spec:
  strategy: production
  storage:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3
      resources:
        requests:
          cpu: 1
          memory: 16Gi
        limits:
          memory: 16Gi
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: simple-prod
spec:
  strategy: production
  storage:
    type: elasticsearch
    elasticsearch:
      nodeCount: 1
      storage: (1)
        storageClassName: gp2
        size: 5Gi
      resources:
        requests:
          cpu: 200m
          memory: 4Gi
        limits:
          memory: 4Gi
      redundancyPolicy: ZeroRedundancy
1 Persistent storage configuration. In this case AWS gp2 with 5Gi size. When no value is specified, distributed tracing platform (Jaeger) uses emptyDir. The OpenShift Elasticsearch Operator provisions PersistentVolumeClaim and PersistentVolume which are not removed with the distributed tracing platform (Jaeger) instance. You can mount the same volumes if you create a distributed tracing platform (Jaeger) instance with the same name and namespace.
You can use an existing Elasticsearch cluster for storage with distributed tracing platform. An existing Elasticsearch cluster, also known as an external Elasticsearch instance, is an instance that was not installed by the Red Hat OpenShift distributed tracing platform (Jaeger) Operator or by the OpenShift Elasticsearch Operator.
When you deploy a Jaeger custom resource, the Red Hat OpenShift distributed tracing platform (Jaeger) Operator will not provision Elasticsearch if the following configurations are set:
spec.storage.elasticsearch.doNotProvision is set to true
spec.storage.options.es.server-urls has a value
spec.storage.elasticsearch.name has a value, or if the Elasticsearch instance name is elasticsearch.
The Red Hat OpenShift distributed tracing platform (Jaeger) Operator uses the Elasticsearch instance specified in spec.storage.elasticsearch.name
to connect to Elasticsearch.
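A minimal sketch of such a CR, assuming a hypothetical external Elasticsearch endpoint, might look like this:

apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: external-es          # illustrative name
spec:
  strategy: production
  storage:
    type: elasticsearch
    elasticsearch:
      doNotProvision: true   # do not self-provision; use the existing cluster
    options:
      es:
        server-urls: https://my-elasticsearch.example.com:9200  # hypothetical endpoint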
You cannot share or reuse an OpenShift Container Platform logging Elasticsearch instance with distributed tracing platform (Jaeger). The Elasticsearch cluster is meant to be dedicated to a single distributed tracing platform (Jaeger) instance.
The following configuration parameters are for an already existing Elasticsearch instance, also known as an external Elasticsearch instance. In this case, you specify configuration options for Elasticsearch under spec:storage:options:es in your custom resource file.
Parameter | Description | Values | Default value
---|---|---|---
es: server-urls: | URL of the Elasticsearch instance. | The fully-qualified domain name of the Elasticsearch server. | http://elasticsearch.<namespace>.svc:9200
es: max-doc-count: | The maximum document count to return from an Elasticsearch query. This will also apply to aggregations. If you set both es.max-doc-count and es.max-num-spans, Elasticsearch will use the smaller value of the two. | | 10000
es: max-num-spans: | [Deprecated - Will be removed in a future release, use es.max-doc-count instead.] The maximum number of spans to fetch at a time, per query, in Elasticsearch. If you set both es.max-num-spans and es.max-doc-count, Elasticsearch will use the smaller value of the two. | | 10000
es: max-span-age: | The maximum lookback for spans in Elasticsearch. | | 72h0m0s
es: sniffer: | The sniffer configuration for Elasticsearch. The client uses the sniffing process to find all nodes automatically. Disabled by default. | true/false | false
es: sniffer-tls-enabled: | Option to enable TLS when sniffing an Elasticsearch Cluster. The client uses the sniffing process to find all nodes automatically. Disabled by default. | true/false | false
es: timeout: | Timeout used for queries. When set to zero there is no timeout. | | 0s
es: username: | The username required by Elasticsearch. The basic authentication also loads CA if it is specified. See also es.password. | |
es: password: | The password required by Elasticsearch. See also, es.username. | |
es: version: | The major Elasticsearch version. If not specified, the value will be auto-detected from Elasticsearch. | | 0
Parameter | Description | Values | Default value
---|---|---|---
es: num-replicas: | The number of replicas per index in Elasticsearch. | | 1
es: num-shards: | The number of shards per index in Elasticsearch. | | 5
Parameter | Description | Values | Default value
---|---|---|---
es: create-index-templates: | Automatically create index templates at application startup when set to true. When templates are installed manually, set to false. | true/false | true
es: index-prefix: | Optional prefix for distributed tracing platform (Jaeger) indices. For example, setting this to "production" creates indices named "production-tracing-*". | |
Parameter | Description | Values | Default value
---|---|---|---
es: bulk: actions: | The number of requests that can be added to the queue before the bulk processor decides to commit updates to disk. | | 1000
es: bulk: flush-interval: | A time.Duration after which bulk requests are committed, regardless of other thresholds. To disable the bulk processor flush interval, set this to zero. | | 200ms
es: bulk: size: | The number of bytes that the bulk requests can take up before the bulk processor decides to commit updates to disk. | | 5000000
es: bulk: workers: | The number of workers that are able to receive and commit bulk requests to Elasticsearch. | | 1
Parameter | Description | Values | Default value
---|---|---|---
es: tls: ca: | Path to a TLS Certification Authority (CA) file used to verify the remote servers. | | Will use the system truststore by default.
es: tls: cert: | Path to a TLS Certificate file, used to identify this process to the remote servers. | |
es: tls: enabled: | Enable transport layer security (TLS) when talking to the remote servers. Disabled by default. | true/false | false
es: tls: key: | Path to a TLS Private Key file, used to identify this process to the remote servers. | |
es: tls: server-name: | Override the expected TLS server name in the certificate of the remote servers. | |
es: token-file: | Path to a file containing the bearer token. This flag also loads the Certification Authority (CA) file if it is specified. | |
Parameter | Description | Values | Default value
---|---|---|---
es-archive: bulk: actions: | The number of requests that can be added to the queue before the bulk processor decides to commit updates to disk. | | 0
es-archive: bulk: flush-interval: | A time.Duration after which bulk requests are committed, regardless of other thresholds. To disable the bulk processor flush interval, set this to zero. | | 0s
es-archive: bulk: size: | The number of bytes that the bulk requests can take up before the bulk processor decides to commit updates to disk. | | 0
es-archive: bulk: workers: | The number of workers that are able to receive and commit bulk requests to Elasticsearch. | | 0
es-archive: create-index-templates: | Automatically create index templates at application startup when set to true. When templates are installed manually, set to false. | true/false | false
es-archive: enabled: | Enable extra storage. | true/false | false
es-archive: index-prefix: | Optional prefix for distributed tracing platform (Jaeger) indices. For example, setting this to "production" creates indices named "production-tracing-*". | |
es-archive: max-doc-count: | The maximum document count to return from an Elasticsearch query. This will also apply to aggregations. | | 0
es-archive: max-num-spans: | [Deprecated - Will be removed in a future release, use es-archive.max-doc-count instead.] The maximum number of spans to fetch at a time, per query, in Elasticsearch. | | 0
es-archive: max-span-age: | The maximum lookback for spans in Elasticsearch. | | 0s
es-archive: num-replicas: | The number of replicas per index in Elasticsearch. | | 0
es-archive: num-shards: | The number of shards per index in Elasticsearch. | | 0
es-archive: password: | The password required by Elasticsearch. See also, es-archive.username. | |
es-archive: server-urls: | The comma-separated list of Elasticsearch servers. Must be specified as fully qualified URLs, for example, http://localhost:9200. | |
es-archive: sniffer: | The sniffer configuration for Elasticsearch. The client uses the sniffing process to find all nodes automatically. Disabled by default. | true/false | false
es-archive: sniffer-tls-enabled: | Option to enable TLS when sniffing an Elasticsearch Cluster. The client uses the sniffing process to find all nodes automatically. Disabled by default. | true/false | false
es-archive: timeout: | Timeout used for queries. When set to zero there is no timeout. | | 0s
es-archive: tls: ca: | Path to a TLS Certification Authority (CA) file used to verify the remote servers. | | Will use the system truststore by default.
es-archive: tls: cert: | Path to a TLS Certificate file, used to identify this process to the remote servers. | |
es-archive: tls: enabled: | Enable transport layer security (TLS) when talking to the remote servers. Disabled by default. | true/false | false
es-archive: tls: key: | Path to a TLS Private Key file, used to identify this process to the remote servers. | |
es-archive: tls: server-name: | Override the expected TLS server name in the certificate of the remote servers. | |
es-archive: token-file: | Path to a file containing the bearer token. This flag also loads the Certification Authority (CA) file if it is specified. | |
es-archive: username: | The username required by Elasticsearch. The basic authentication also loads CA if it is specified. See also es-archive.password. | |
es-archive: version: | The major Elasticsearch version. If not specified, the value will be auto-detected from Elasticsearch. | | 0
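As an illustrative sketch (the endpoint shown is the same hypothetical quickstart cluster used in the following example), enabling the extra archive storage might look like this:

apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: simple-prod
spec:
  strategy: production
  storage:
    type: elasticsearch
    options:
      es:
        server-urls: https://quickstart-es-http.default.svc:9200
      es-archive:
        enabled: true   # turn on the extra archive storage
        server-urls: https://quickstart-es-http.default.svc:9200  # may point at the same or a separate cluster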
The following example shows a Jaeger CR using an external Elasticsearch cluster with TLS CA certificate mounted from a volume and user/password stored in a secret.
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: simple-prod
spec:
  strategy: production
  storage:
    type: elasticsearch
    options:
      es:
        server-urls: https://quickstart-es-http.default.svc:9200 (1)
        index-prefix: my-prefix
        tls: (2)
          ca: /es/certificates/ca.crt
    secretName: tracing-secret (3)
  volumeMounts: (4)
    - name: certificates
      mountPath: /es/certificates/
      readOnly: true
  volumes:
    - name: certificates
      secret:
        secretName: quickstart-es-http-certs-public
1 URL to Elasticsearch service running in the default namespace.
2 TLS configuration. In this case only the CA certificate, but it can also contain es.tls.key and es.tls.cert when using mutual TLS.
3 Secret which defines environment variables ES_PASSWORD and ES_USERNAME. Created by kubectl create secret generic tracing-secret --from-literal=ES_PASSWORD=changeme --from-literal=ES_USERNAME=elastic
4 Volume mounts and volumes which are mounted into all storage components.
You can create and manage certificates using the OpenShift Elasticsearch Operator. Managing certificates using the OpenShift Elasticsearch Operator also lets you use a single Elasticsearch cluster with multiple Jaeger Collectors.
Managing certificates with Elasticsearch is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Starting with version 2.4, the Red Hat OpenShift distributed tracing platform (Jaeger) Operator delegates certificate creation to the OpenShift Elasticsearch Operator by using the following annotations in the Elasticsearch custom resource:
logging.openshift.io/elasticsearch-cert-management: "true"
logging.openshift.io/elasticsearch-cert.jaeger-<shared-es-node-name>: "user.jaeger"
logging.openshift.io/elasticsearch-cert.curator-<shared-es-node-name>: "system.logging.curator"
Where the <shared-es-node-name>
is the name of the Elasticsearch node. For example, if you create an Elasticsearch node named custom-es
, your custom resource might look like the following example.
apiVersion: logging.openshift.io/v1
kind: Elasticsearch
metadata:
  annotations:
    logging.openshift.io/elasticsearch-cert-management: "true"
    logging.openshift.io/elasticsearch-cert.jaeger-custom-es: "user.jaeger"
    logging.openshift.io/elasticsearch-cert.curator-custom-es: "system.logging.curator"
  name: custom-es
spec:
  managementState: Managed
  nodeSpec:
    resources:
      limits:
        memory: 16Gi
      requests:
        cpu: 1
        memory: 16Gi
  nodes:
    - nodeCount: 3
      proxyResources: {}
      resources: {}
      roles:
        - master
        - client
        - data
      storage: {}
  redundancyPolicy: ZeroRedundancy
The Red Hat OpenShift Service Mesh Operator is installed.
OpenShift Logging is installed with the default configuration in your cluster.
The Elasticsearch node and the Jaeger instances must be deployed in the same namespace, for example, tracing-system.
You enable certificate management by setting spec.storage.elasticsearch.useCertManagement to true in the Jaeger custom resource, as shown in the following example.
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: jaeger-prod
spec:
  strategy: production
  storage:
    type: elasticsearch
    elasticsearch:
      name: custom-es
      doNotProvision: true
      useCertManagement: true
The Red Hat OpenShift distributed tracing platform (Jaeger) Operator sets the Elasticsearch custom resource name
to the value of spec.storage.elasticsearch.name
from the Jaeger custom resource when provisioning Elasticsearch.
The certificates are provisioned by the OpenShift Elasticsearch Operator and the Red Hat OpenShift distributed tracing platform (Jaeger) Operator injects the certificates.
Query is a service that retrieves traces from storage and hosts the user interface to display them.
Parameter | Description | Values | Default value
---|---|---|---
spec: query: replicas: | Specifies the number of Query replicas to create. | Integer, for example, 2 |

Parameter | Description | Values | Default value
---|---|---|---
spec: query: options: {} | Configuration options that define the Query service. | |
options: log-level: | Logging level for Query. | Possible values: debug, info, warn, error, fatal, panic |
options: query: base-path: | The base path for all jaeger-query HTTP routes can be set to a non-root value, for example, /jaeger would cause all UI URLs to start with /jaeger. This can be useful when running jaeger-query behind a reverse proxy. | /<path> |
apiVersion: jaegertracing.io/v1
kind: "Jaeger"
metadata:
  name: "my-jaeger"
spec:
  strategy: allInOne
  allInOne:
    options:
      log-level: debug
      query:
        base-path: /jaeger
Ingester is a service that reads from a Kafka topic and writes to the Elasticsearch storage backend. If you are using the allInOne
or production
deployment strategies, you do not need to configure the Ingester service.
Parameter | Description | Values
---|---|---
spec: ingester: options: {} | Configuration options that define the Ingester service. |
options: deadlockInterval: | Specifies the interval, in seconds or minutes, that the Ingester must wait for a message before terminating. The deadlock interval is disabled by default (set to 0), to avoid terminating the Ingester when no messages arrive during system initialization. | Minutes and seconds, for example, 1m0s. Default value is 0.
options: kafka: consumer: topic: | The topic parameter identifies the Kafka configuration used by the Ingester to consume the messages. | Label for the consumer. For example, jaeger-spans.
options: kafka: consumer: brokers: | Identifies the Kafka configuration used by the Ingester to consume the messages. | Label for the broker, for example, my-cluster-kafka-brokers.kafka:9092.
options: log-level: | Logging level for the Ingester. | Possible values: debug, info, warn, error, fatal, panic
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: simple-streaming
spec:
  strategy: streaming
  collector:
    options:
      kafka:
        producer:
          topic: jaeger-spans
          brokers: my-cluster-kafka-brokers.kafka:9092
  ingester:
    options:
      kafka:
        consumer:
          topic: jaeger-spans
          brokers: my-cluster-kafka-brokers.kafka:9092
      ingester:
        deadlockInterval: 5
  storage:
    type: elasticsearch
    options:
      es:
        server-urls: http://elasticsearch:9200
The Red Hat OpenShift distributed tracing platform (Jaeger) relies on a proxy sidecar within the application’s pod to provide the Agent. The Red Hat OpenShift distributed tracing platform (Jaeger) Operator can inject Agent sidecars into deployment workloads. You can enable automatic sidecar injection or manage it manually.
The Red Hat OpenShift distributed tracing platform (Jaeger) Operator can inject Jaeger Agent sidecars into deployment workloads. To enable automatic injection of sidecars, add the sidecar.jaegertracing.io/inject
annotation set to either the string true
or to the distributed tracing platform (Jaeger) instance name that is returned by running $ oc get jaegers
.
When you specify true
, there must be only a single distributed tracing platform (Jaeger) instance for the same namespace as the deployment. Otherwise, the Operator is unable to determine which distributed tracing platform (Jaeger) instance to use. A specific distributed tracing platform (Jaeger) instance name on a deployment has a higher precedence than true
applied on its namespace.
The following snippet shows a simple application that will inject a sidecar, with the agent pointing to the single distributed tracing platform (Jaeger) instance available in the same namespace:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  annotations:
    "sidecar.jaegertracing.io/inject": "true" (1)
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: acme/myapp:myversion
1 Set to either the string true or to the Jaeger instance name.
When the sidecar is injected, the agent can then be accessed at its default location on localhost.
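As an illustrative sketch (JAEGER_AGENT_HOST and JAEGER_AGENT_PORT are standard Jaeger client environment variables, not Operator-specific settings), an instrumented container can point its tracer at the injected sidecar like this:

containers:
- name: myapp
  image: acme/myapp:myversion
  env:
  - name: JAEGER_AGENT_HOST   # standard Jaeger client variable; localhost reaches the injected sidecar
    value: "localhost"
  - name: JAEGER_AGENT_PORT   # default UDP port for the jaeger.thrift compact protocol
    value: "6831"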
The Red Hat OpenShift distributed tracing platform (Jaeger) Operator can only automatically inject Jaeger Agent sidecars into Deployment workloads. For controller types other than Deployments, such as StatefulSets and DaemonSets, you can manually define the Jaeger agent sidecar in your specification.
The following StatefulSet snippet shows the manual definition you can include in your containers section for a Jaeger agent sidecar:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example-statefulset
  namespace: example-ns
  labels:
    app: example-app
spec:
  serviceName: example-app
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: acme/myapp:myversion
        ports:
        - containerPort: 8080
          protocol: TCP
      - name: jaeger-agent
        image: registry.redhat.io/distributed-tracing/jaeger-agent-rhel7:<version>
        # The agent version must match the Operator version
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 5775
          name: zk-compact-trft
          protocol: UDP
        - containerPort: 5778
          name: config-rest
          protocol: TCP
        - containerPort: 6831
          name: jg-compact-trft
          protocol: UDP
        - containerPort: 6832
          name: jg-binary-trft
          protocol: UDP
        - containerPort: 14271
          name: admin-http
          protocol: TCP
        args:
        - --reporter.grpc.host-port=dns:///jaeger-collector-headless.example-ns:14250
        - --reporter.type=grpc
The agent can then be accessed at its default location on localhost.