$ oc login --username=<your_username>
The Red Hat OpenShift distributed tracing data collection Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Installing the distributed tracing data collection involves the following steps:
Installing the Red Hat OpenShift distributed tracing data collection Operator.
Creating a namespace for an OpenTelemetry Collector instance.
Creating an OpenTelemetryCollector custom resource to deploy the OpenTelemetry Collector instance.
You can install the distributed tracing data collection from the Administrator view of the web console.
You are logged in to the web console as a cluster administrator with the cluster-admin role.
For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role.
An active OpenShift CLI (oc) session by a cluster administrator with the cluster-admin role.
Install the Red Hat OpenShift distributed tracing data collection Operator:
Go to Operators → OperatorHub and search for the Red Hat OpenShift distributed tracing data collection Operator.
Select the Red Hat OpenShift distributed tracing data collection Operator that is provided by Red Hat, and then select Install → Install → View Operator.
This installs the Operator with the default presets.
In the Details tab of the installed Operator page, under ClusterServiceVersion details, verify that the installation Status is Succeeded.
Create a project of your choice for the OpenTelemetry Collector instance that you will create in the next step by going to Home → Projects → Create Project.
Create an OpenTelemetry Collector instance.
Go to Operators → Installed Operators.
Select OpenTelemetry Collector → Create OpenTelemetryCollector → YAML view.
In the YAML view, customize the OpenTelemetryCollector custom resource (CR) with the OTLP, Jaeger, and Zipkin receivers and the logging exporter.
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel
  namespace: <project_of_opentelemetry_collector_instance>
spec:
  mode: deployment
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
      jaeger:
        protocols:
          grpc:
          thrift_binary:
          thrift_compact:
          thrift_http:
      zipkin:
    processors:
      batch:
      memory_limiter:
        check_interval: 1s
        limit_percentage: 50
        spike_limit_percentage: 30
    exporters:
      logging:
    service:
      pipelines:
        traces:
          receivers: [otlp,jaeger,zipkin]
          processors: [memory_limiter,batch]
          exporters: [logging]
Select Create.
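As an alternative to the web console, the same CR can be applied from an active oc session; the file name below is only an example:

```shell
# Save the OpenTelemetryCollector CR shown above to a file (name is illustrative),
# then apply it in the project you created for the collector instance:
oc apply -f otel-collector.yaml -n <project_of_opentelemetry_collector_instance>
```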
Verify that the status.phase of the OpenTelemetry Collector pod is Running and that the conditions include type: Ready by running the following command:
$ oc get pod -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=<namespace>.<instance_name> -o yaml
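To check only the pod phase rather than scanning the full YAML output, a jsonpath query can be used as a quick sketch; it assumes the label selector matches a single pod:

```shell
# Print only the phase of the first matching collector pod;
# anything other than "Running" indicates a problem:
oc get pod -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=<namespace>.<instance_name> \
  -o jsonpath='{.items[0].status.phase}'
```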
Get the OpenTelemetry Collector service by running the following command:
$ oc get service -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=<namespace>.<instance_name>
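To confirm that the collector accepts data end to end, one option is to port-forward the service and post a minimal span to the Zipkin receiver (9411 is the Zipkin default port). The service name and the span field values below are illustrative assumptions; verify the service name against the output of the previous command:

```shell
# Forward the Zipkin receiver port locally (the Operator names the service
# <instance_name>-collector by convention; confirm before running):
oc port-forward service/<instance_name>-collector 9411:9411 &

# Post a minimal test span in Zipkin v2 JSON format (IDs are placeholders):
curl -X POST http://localhost:9411/api/v2/spans \
  -H 'Content-Type: application/json' \
  -d '[{"id":"352bff9a74ca9ad2","traceId":"5af7183fb1d4cf5f","name":"test-span","timestamp":1556604172355737,"duration":1431,"localEndpoint":{"serviceName":"test"}}]'

# Because the CR configures the logging exporter, the received span should
# appear in the collector pod logs:
oc logs -l app.kubernetes.io/managed-by=opentelemetry-operator --tail=20
```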