You can view your application’s topology, health, and metrics in the Kiali console. If your service is experiencing problems, the Kiali console allows you to view the data flow through your service. You can view insights about the mesh components at different levels, including abstract applications, services, and workloads. It also provides an interactive graph view of your namespace in real time.

You can observe the data flow through your application if you have an application installed. If you do not have your own application installed, you can see how observability works in Red Hat OpenShift Service Mesh by installing the Bookinfo sample application.

Accessing metrics and tracing data from the CLI

Access the Jaeger, Prometheus, and Grafana consoles to view and manage your data.

Procedure
  1. Switch to the control plane project. In this example, istio-system is the control plane project. Run the following command:

    $ oc project istio-system
  2. Get the routes to Red Hat OpenShift Service Mesh components. Run the following command:

    $ oc get routes

    This command returns URLs for the web consoles of Kiali, Jaeger, Prometheus, and Grafana, and any other routes in your service mesh.

  3. Copy the URL for the component you want from the HOST/PORT column into a browser to open the console.
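For illustration, the HOST/PORT column of the oc get routes output typically includes entries like the following. The exact route names and host names depend on your installation; the example.com domain is a placeholder:

```text
NAME                   HOST/PORT
grafana                grafana-istio-system.apps.example.com
istio-ingressgateway   istio-ingressgateway-istio-system.apps.example.com
jaeger                 jaeger-istio-system.apps.example.com
kiali                  kiali-istio-system.apps.example.com
prometheus             prometheus-istio-system.apps.example.com
```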

Viewing service mesh data

The Kiali operator works with the telemetry data gathered in Red Hat OpenShift Service Mesh to provide graphs and real-time network diagrams of the applications, services, and workloads in your namespace.

To access the Kiali console you must have Red Hat OpenShift Service Mesh installed and projects configured for the service mesh.

Procedure
  1. Use the perspective switcher to switch to the Administrator perspective.

  2. Click Home > Projects.

  3. Click the name of your project. For example, click bookinfo.

  4. In the Launcher section, click Kiali.

  5. Log in to the Kiali console with the same user name and password that you use to access the OpenShift Container Platform console.

When you first log in to the Kiali console, you see the Overview page, which displays all the namespaces in your mesh that you have permission to view.

Working with data in the Kiali console

From the Graph menu in the Kiali console, you can use the following graphs and viewing tools to gain deeper insights about data that travels through your service mesh. These tools can help you identify problems with services or workloads.

There are several graphs to choose from:

  • The App graph shows an aggregate workload for all applications that are labeled the same.

  • The Versioned App graph shows a node for each version of an application. All versions of an application are grouped together.

  • The Workload graph shows a node for each workload in your service mesh. This graph does not require you to use the application and version labels. If your application does not use version labels, use this graph.

  • The Service graph shows a node for each service in your mesh but excludes all applications and workloads from the graph. It provides a high-level view and aggregates all traffic for defined services.

To view a summary of metrics, select any node or edge in the graph to display its metric details in the summary details panel.

Namespace graphs

The namespace graph is a map of the services, deployments, and workloads in your namespace, with arrows that show how data flows through them.

Prerequisite
  • Install the Bookinfo sample application.

Procedure
  1. Send traffic to the mesh by entering the following command several times.

    $ curl "http://$GATEWAY_URL/productpage"

    This command simulates a user visiting the productpage microservice of the application.

  2. In the main navigation, click Graph to view a namespace graph.

  3. Select bookinfo from the Namespace menu.
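The curl command in step 1 assumes that GATEWAY_URL is already exported in your shell. If it is not set, one common way to derive it, assuming the default istio-ingressgateway route in the istio-system control plane project, is:

```shell
# Derive the gateway host from the ingress gateway route (default names assumed).
export GATEWAY_URL=$(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')
echo "$GATEWAY_URL"
```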

Distributed tracing

Distributed tracing is the process of tracking the performance of individual services in an application by tracing the path of the service calls in the application. Each time a user takes action in an application, a request is executed that might require many services to interact to produce a response. The path of this request is called a distributed transaction.

Red Hat OpenShift Service Mesh uses Jaeger to allow developers to view call flows in a microservice application.

Generating example traces and analyzing trace data

Jaeger is an open source distributed tracing system. With Jaeger, you can perform a trace that follows the path of a request through the various microservices that make up an application. Jaeger is installed by default as part of the Service Mesh.

This tutorial uses Service Mesh and the Bookinfo sample application to demonstrate how you can use Jaeger to perform distributed tracing.

Prerequisites
  • OpenShift Container Platform 4.1 or higher installed.

  • Red Hat OpenShift Service Mesh 2.0.6 installed.

  • Jaeger enabled during the installation.

  • Bookinfo example application installed.

Procedure
  1. After installing the Bookinfo sample application, send traffic to the mesh. Enter the following command several times.

    $ curl "http://$GATEWAY_URL/productpage"

    This command simulates a user visiting the productpage microservice of the application.

  2. In the OpenShift Container Platform console, navigate to Networking > Routes and find the Jaeger route. The URL is listed under Location.

    • Alternatively, use the CLI to query for details of the route. In this example, istio-system is the control plane namespace:

      $ export JAEGER_URL=$(oc get route -n istio-system jaeger -o jsonpath='{.spec.host}')

      Enter the following command to reveal the URL for the Jaeger console, then paste the result into a browser and navigate to that URL:

      $ echo $JAEGER_URL
  3. Log in using the same user name and password as you use to access the OpenShift Container Platform console.

  4. In the left pane of the Jaeger dashboard, from the Service menu, select productpage.bookinfo and click the Find Traces button at the bottom of the pane. A list of traces is displayed.

  5. Click one of the traces in the list to open a detailed view of that trace. If you click the first one in the list, which is the most recent trace, you see the details that correspond to the latest refresh of the /productpage.

Adjusting the sampling rate

The distributed tracing sampling rate is set to sample 100% of traces in your service mesh by default. A high sampling rate consumes cluster resources and can affect performance, but is useful when debugging issues. Before you deploy Red Hat OpenShift Service Mesh in production, set the value to a smaller proportion of traces.

A trace is an execution path between services in the service mesh. A trace consists of one or more spans. A span is a logical unit of work that has a name, a start time, and a duration.

The sampling rate determines how often a trace is generated. Configure sampling as a scaled integer representing 0.01% increments.

In a basic installation, spec.tracing.sampling is set to 10000, which samples 100% of traces. For example:

  • Setting the value to 10 samples 0.1% of traces.

  • Setting the value to 500 samples 5% of traces.

Setting the value to 10000 is useful for debugging, but can affect performance. For production, set spec.tracing.sampling to 100.
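Because the value is a scaled integer in 0.01% increments, the sampled percentage is simply the value divided by 100. A quick shell sketch of the conversion:

```shell
# Convert a spec.tracing.sampling value (0.01% increments) to a percentage.
sampling_to_percent() {
  printf '%d.%02d%%\n' "$(( $1 / 100 ))" "$(( $1 % 100 ))"
}

sampling_to_percent 10000   # 100.00%
sampling_to_percent 500     # 5.00%
sampling_to_percent 10      # 0.10%
```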

Procedure
  1. In the OpenShift Container Platform web console, click Operators > Installed Operators.

  2. Click the Project menu and select the project where you installed the control plane, for example istio-system.

  3. Click the Red Hat OpenShift Service Mesh Operator. In the Istio Service Mesh Control Plane column, click the name of your ServiceMeshControlPlane resource, for example basic.

  4. To adjust the sampling rate, set a different value for spec.tracing.sampling.

    1. Click the YAML tab.

    2. Set the value for spec.tracing.sampling in your ServiceMeshControlPlane resource. The following example sets it to 100.

      Jaeger sampling example
      spec:
        tracing:
          sampling: 100
    3. Click Save.

  5. Click Reload to verify the ServiceMeshControlPlane resource was configured correctly.
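For context, the sampling field sits under spec of the ServiceMeshControlPlane resource. The following is a minimal sketch, assuming the resource name basic and the istio-system namespace from the steps above:

```yaml
apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic
  namespace: istio-system
spec:
  tracing:
    sampling: 100   # scaled integer: 100 = 1% of traces
```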

Connecting standalone Jaeger

If you already use standalone Jaeger for distributed tracing in OpenShift Container Platform, configure your ServiceMeshControlPlane resource to use that standalone Jaeger instance rather than the one installed with Red Hat OpenShift Service Mesh.

Prerequisites
  • Configure and deploy a standalone Jaeger instance. For more information, see the Jaeger documentation.

Procedure
  1. In the OpenShift Container Platform web console, click Operators > Installed Operators.

  2. Click the Project menu and select the project where you installed the control plane, for example istio-system.

  3. Click the Red Hat OpenShift Service Mesh Operator. In the Istio Service Mesh Control Plane column, click the name of your ServiceMeshControlPlane resource, for example basic.

  4. Add the name of your standalone Jaeger instance to the ServiceMeshControlPlane.

    1. Click the YAML tab.

    2. Add the name of your standalone Jaeger instance to spec.addons.jaeger.name in your ServiceMeshControlPlane resource. In the following example, simple-prod is the name of your standalone Jaeger instance.

      Standalone Jaeger example
      spec:
        addons:
          jaeger:
            name: simple-prod
    3. Click Save.

  5. Click Reload to verify the ServiceMeshControlPlane resource was configured correctly.
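In context, the addon configuration looks like the following minimal sketch, with the resource name basic and the istio-system namespace assumed, and simple-prod standing in for the name of your standalone Jaeger instance:

```yaml
apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic
  namespace: istio-system
spec:
  addons:
    jaeger:
      name: simple-prod   # must match the name of the standalone Jaeger instance
```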

For more information about configuring Jaeger, see the Jaeger documentation.

Accessing Grafana

Grafana is an analytics tool that you can use to view, query, and analyze your service mesh metrics. In this example, istio-system is the control plane namespace.

Procedure
  1. Log in to the OpenShift Container Platform web console.

  2. Click the Project menu and select the project where you installed the control plane, for example istio-system.

  3. Click Routes.

  4. Click the link in the Location column for the Grafana row.

  5. Log in to the Grafana console with your OpenShift Container Platform credentials.

Accessing Prometheus

Prometheus is a monitoring and alerting tool that you can use to collect multi-dimensional data about your microservices. In this example, istio-system is the control plane namespace.

Procedure
  1. Log in to the OpenShift Container Platform web console.

  2. Click the Project menu and select the project where you installed the control plane, for example istio-system.

  3. Click Routes.

  4. Click the link in the Location column for the Prometheus row.

  5. Log in to the Prometheus console with your OpenShift Container Platform credentials.
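As with the Jaeger route earlier, you can also retrieve the console URLs from the CLI. A hedged sketch, assuming the default route names grafana and prometheus in the istio-system control plane project:

```shell
# Print the Grafana and Prometheus console hosts (default route names assumed).
oc -n istio-system get route grafana -o jsonpath='{.spec.host}{"\n"}'
oc -n istio-system get route prometheus -o jsonpath='{"\n"}{.spec.host}{"\n"}'
```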