When the Service Mesh Operator deploys the ServiceMeshControlPlane resource, it can also create the resources for distributed tracing. Service Mesh uses Jaeger for distributed tracing. You enable distributed tracing by specifying a tracing type and a sampling rate in the ServiceMeshControlPlane resource.
All-in-one Jaeger parameters
apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic
spec:
  version: v2.3.1
  tracing:
    sampling: 100
    type: Jaeger
Currently, the only tracing type that is supported is Jaeger. Jaeger is enabled by default. To disable tracing, set type to None.
The sampling rate determines how often the Envoy proxy generates a trace. You can use the sampling rate option to control what percentage of requests get reported to your tracing system. You can configure this setting based on the traffic in the mesh and the amount of tracing data you want to collect. You configure sampling as a scaled integer representing 0.01% increments. For example, setting the value to 10 samples 0.1% of traces, setting the value to 500 samples 5% of traces, and setting the value to 10000 samples 100% of traces.
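For illustration only, a minimal sketch of the tracing block for a mesh that reports roughly 5% of requests might look like the following; the value 500 simply follows the 0.01% scaling described above:
spec:
  tracing:
    sampling: 500 # 500 x 0.01% = 5% of requests traced
    type: Jaeger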
The SMCP sampling configuration option controls the Envoy sampling rate. You configure the Jaeger trace sampling rate in the Jaeger custom resource.
You configure Jaeger under the addons section of the ServiceMeshControlPlane resource. However, there are some limitations to what you can configure in the SMCP.
When the SMCP passes configuration information to the Red Hat OpenShift distributed tracing platform Operator, it triggers one of three deployment strategies: allInOne, production, or streaming.
The distributed tracing platform has predefined deployment strategies. You specify a deployment strategy in the Jaeger custom resource (CR) file. When you create an instance of the distributed tracing platform, the Red Hat OpenShift distributed tracing platform Operator uses this configuration file to create the objects necessary for the deployment.
The Red Hat OpenShift distributed tracing platform Operator currently supports the following deployment strategies:
allInOne (default) - This strategy is intended for development, testing, and demo purposes; it is not intended for production use. The main back-end components, Agent, Collector, and Query service, are all packaged into a single executable, which is configured (by default) to use in-memory storage. You can configure this deployment strategy in the SMCP.
In-memory storage is not persistent, which means that if the Jaeger instance shuts down, restarts, or is replaced, your trace data will be lost. In-memory storage also cannot be scaled, because each pod has its own memory. For persistent storage, you must use the production or streaming deployment strategy, which uses Elasticsearch as the default storage.
production - The production strategy is intended for production environments, where long-term storage of trace data is important and a more scalable and highly available architecture is required. Each back-end component is therefore deployed separately. The Agent can be injected as a sidecar on the instrumented application. The Query and Collector services are configured with a supported storage type, which is currently Elasticsearch. Multiple instances of each of these components can be provisioned as required for performance and resilience purposes. You can configure this deployment strategy in the SMCP, but in order to be fully customized, you must specify your configuration in the Jaeger CR and link that to the SMCP.
streaming - The streaming strategy is designed to augment the production strategy by providing a streaming capability that sits between the Collector and the Elasticsearch back-end storage. This reduces the pressure on the back-end storage under high load and enables other trace post-processing capabilities to tap into the real-time span data directly from the streaming platform (AMQ Streams/Kafka). You cannot configure this deployment strategy in the SMCP; you must configure a Jaeger CR and link that to the SMCP.
The streaming strategy requires an additional Red Hat subscription for AMQ Streams.
If you do not specify Jaeger configuration options, the ServiceMeshControlPlane resource uses the allInOne Jaeger deployment strategy by default. When using the default allInOne deployment strategy, set spec.addons.jaeger.install.storage.type to Memory. You can accept the defaults or specify additional configuration options under install.
apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic
spec:
  version: v2.3.1
  tracing:
    sampling: 10000
    type: Jaeger
  addons:
    jaeger:
      name: jaeger
      install:
        storage:
          type: Memory
To use the default settings for the production deployment strategy, set spec.addons.jaeger.install.storage.type to Elasticsearch and specify additional configuration options under install. Note that the SMCP only supports configuring Elasticsearch resources and image name.
apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic
spec:
  version: v2.3.1
  tracing:
    sampling: 10000
    type: Jaeger
  addons:
    jaeger:
      name: jaeger #name of Jaeger CR
      install:
        storage:
          type: Elasticsearch
        ingress:
          enabled: true
  runtime:
    components:
      tracing.jaeger.elasticsearch: # only supports resources and image name
        container:
          resources: {}
The SMCP supports only minimal Elasticsearch parameters. To fully customize your production environment and access all of the Elasticsearch configuration parameters, use the Jaeger custom resource (CR) to configure Jaeger. Create and configure your Jaeger instance and set spec.addons.jaeger.name to the name of the Jaeger instance, in this example: MyJaegerInstance.
apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic
spec:
  version: v2.3.1
  tracing:
    sampling: 1000
    type: Jaeger
  addons:
    jaeger:
      name: MyJaegerInstance #name of Jaeger CR
      install:
        storage:
          type: Elasticsearch
        ingress:
          enabled: true
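The Jaeger CR that spec.addons.jaeger.name references is where the full production configuration lives. As a rough, non-authoritative sketch, such a resource might look like the following; the node count and redundancy policy are assumed values that you would size for your environment, and the name is kept as MyJaegerInstance only to match the SMCP example above:
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: MyJaegerInstance   # must match spec.addons.jaeger.name in the SMCP
  namespace: istio-system  # deploy in the same namespace as the SMCP
spec:
  strategy: production
  storage:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3                       # assumed value; size for your workload
      redundancyPolicy: SingleRedundancy # assumed value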
To use the streaming deployment strategy, you create and configure your Jaeger instance first, then set spec.addons.jaeger.name to the name of the Jaeger instance, in this example: MyJaegerInstance.
apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic
spec:
  version: v2.3.1
  tracing:
    sampling: 1000
    type: Jaeger
  addons:
    jaeger:
      name: MyJaegerInstance #name of Jaeger CR
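For reference, a sketch of what a streaming Jaeger CR might contain is shown below. The Kafka topic and broker address are assumptions; they must point at an existing AMQ Streams (Kafka) cluster, and the resource must be created in the same namespace as the SMCP:
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: MyJaegerInstance   # must match spec.addons.jaeger.name in the SMCP
  namespace: istio-system  # deploy in the same namespace as the SMCP
spec:
  strategy: streaming
  collector:
    options:
      kafka:
        producer:
          topic: jaeger-spans                          # assumed topic name
          brokers: my-cluster-kafka-brokers.kafka:9092 # assumed broker address
  ingester:
    options:
      kafka:
        consumer:
          topic: jaeger-spans
          brokers: my-cluster-kafka-brokers.kafka:9092
  storage:
    type: elasticsearch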
You can fully customize your Jaeger deployment by configuring Jaeger in the Jaeger custom resource (CR) rather than in the ServiceMeshControlPlane (SMCP) resource. This configuration is sometimes referred to as an "external Jaeger" since the configuration is specified outside of the SMCP.
You must deploy the SMCP and Jaeger CR in the same namespace. For example, istio-system.
You can configure and deploy a standalone Jaeger instance and then specify the name of the Jaeger resource as the value for spec.addons.jaeger.name in the SMCP resource. If a Jaeger CR matching the value of name exists, the Service Mesh control plane will use the existing installation. This approach lets you fully customize your Jaeger configuration.
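Before referencing an existing instance from the SMCP, you may want to confirm that a Jaeger CR with the expected name is present in the control plane namespace. One way to check, using the istio-system namespace and the instance name from the examples above:
$ oc get jaeger MyJaegerInstance -n istio-system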
Red Hat OpenShift distributed tracing platform instance names must be unique. If you want to have multiple distributed tracing platform instances and are using sidecar-injected agents, the instances must have unique names, and the injection annotation must explicitly specify the distributed tracing platform instance that the tracing data should be reported to.
If you have a multitenant implementation and tenants are separated by namespaces, deploy a Red Hat OpenShift distributed tracing platform instance to each tenant namespace.
Agent as a daemonset is not supported for multitenant installations or Red Hat OpenShift Dedicated. Agent as a sidecar is the only supported configuration for these use cases.
If you are installing distributed tracing as part of Red Hat OpenShift Service Mesh, the distributed tracing resources must be installed in the same namespace as the ServiceMeshControlPlane resource.
For information about configuring persistent storage, see Understanding persistent storage and the appropriate configuration topic for your chosen storage option.
The distributed tracing platform uses OAuth for default authentication. However, Red Hat OpenShift Service Mesh uses a secret called htpasswd to facilitate communication between dependent services such as Grafana, Kiali, and the distributed tracing platform. When you configure your distributed tracing platform in the ServiceMeshControlPlane, Service Mesh automatically configures security settings to use htpasswd.
If you are specifying your distributed tracing platform configuration in a Jaeger custom resource, you must manually configure the htpasswd settings and ensure the htpasswd secret is mounted into your Jaeger instance so that Kiali can communicate with it.
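If you are taking the Jaeger CR approach, you can first verify that the htpasswd secret created by the Service Mesh control plane exists in the control plane namespace, for example istio-system, before mounting it:
$ oc get secret htpasswd -n istio-system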
You can modify the Jaeger resource to configure distributed tracing platform security for use with Service Mesh in the OpenShift console.
You have access to the cluster as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role.
The Red Hat OpenShift Service Mesh Operator must be installed.
The ServiceMeshControlPlane is deployed to the cluster.
You have access to the OpenShift Container Platform web console.
Log in to the OpenShift Container Platform web console as a user with the cluster-admin role.
Navigate to Operators → Installed Operators.
Click the Project menu and select the project where your ServiceMeshControlPlane resource is deployed from the list, for example istio-system.
Click the Red Hat OpenShift distributed tracing platform Operator.
On the Operator Details page, click the Jaeger tab.
Click the name of your Jaeger instance.
On the Jaeger details page, click the YAML tab to modify your configuration.
Edit the Jaeger custom resource file to add the htpasswd configuration, as shown in the following example, in the spec.ingress.openshift.htpasswdFile, spec.volumes, and spec.volumeMounts settings.
htpasswd configuration
apiVersion: jaegertracing.io/v1
kind: Jaeger
spec:
  ingress:
    enabled: true
    openshift:
      htpasswdFile: /etc/proxy/htpasswd/auth
      sar: '{"namespace": "istio-system", "resource": "pods", "verb": "get"}'
    options: {}
    resources: {}
    security: oauth-proxy
  volumes:
    - name: secret-htpasswd
      secret:
        secretName: htpasswd
    - configMap:
        defaultMode: 420
        items:
          - key: ca-bundle.crt
            path: tls-ca-bundle.pem
        name: trusted-ca-bundle
        optional: true
      name: trusted-ca-bundle
  volumeMounts:
    - mountPath: /etc/proxy/htpasswd
      name: secret-htpasswd
    - mountPath: /etc/pki/ca-trust/extracted/pem/
      name: trusted-ca-bundle
      readOnly: true
Click Save.
You can modify the Jaeger resource to configure distributed tracing platform security for use with Service Mesh from the command line by using the oc utility.
You have access to the cluster as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role.
The Red Hat OpenShift Service Mesh Operator must be installed.
The ServiceMeshControlPlane is deployed to the cluster.
You have access to the OpenShift CLI (oc) that matches your OpenShift Container Platform version.
Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role.
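For example, a login command might look like the following; the user name and API server host name are placeholders for your own values:
$ oc login --username=<user_name> https://<HOSTNAME>:6443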