Table of Contents

Product overview

Red Hat OpenShift Service Mesh overview

This release of Red Hat OpenShift Service Mesh is a Technology Preview release only. Technology Preview releases are not supported with Red Hat production service-level agreements (SLAs), might not be functionally complete, and are not recommended for production use. Using Red Hat OpenShift Service Mesh on a cluster places the entire OpenShift cluster in a Technology Preview, that is, unsupported, state. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Red Hat Technology Preview Features Support Scope.

Red Hat OpenShift Service Mesh is a platform that provides behavioral insight and operational control over the service mesh, providing a uniform way to connect, secure, and monitor microservice applications.

The term service mesh describes the network of microservices that make up applications in a distributed microservice architecture and the interactions between those microservices. As a service mesh grows in size and complexity, it can become harder to understand and manage.

Based on the open source Istio project, Red Hat OpenShift Service Mesh adds a transparent layer on existing distributed applications without requiring any changes to the service code. You add Red Hat OpenShift Service Mesh support to services by deploying a special sidecar proxy throughout your environment that intercepts all network communication between microservices. You configure and manage the service mesh using the control plane features.

Red Hat OpenShift Service Mesh provides an easy way to create a network of deployed services that provides discovery, load balancing, service-to-service authentication, failure recovery, metrics, and monitoring. A service mesh also provides more complex operational functionality, including A/B testing, canary releases, rate limiting, access control, and end-to-end authentication.

Red Hat OpenShift Service Mesh product architecture

Red Hat OpenShift Service Mesh is logically split into a data plane and a control plane:

  • The data plane is composed of a set of intelligent proxies deployed as sidecars. These proxies intercept and control all inbound and outbound network communication between microservices in the service mesh; sidecar proxies also communicate with Mixer, the general-purpose policy and telemetry hub.

  • The control plane is responsible for managing and configuring proxies to route traffic, and configuring Mixers to enforce policies and collect telemetry.

The components that make up the data plane and the control plane are:

  • Envoy proxy is the data plane component that intercepts all inbound and outbound traffic for all services in the service mesh. Envoy is deployed as a sidecar to the relevant service in the same pod.

  • Mixer is the control plane component responsible for enforcing access control and usage policies (such as authorization, rate limits, quotas, authentication, request tracing) and collecting telemetry data from the Envoy proxy and other services.

  • Pilot is the control plane component responsible for configuring the proxies at runtime. Pilot provides service discovery for the Envoy sidecars, traffic management capabilities for intelligent routing (for example, A/B tests or canary deployments), and resiliency (timeouts, retries, and circuit breakers).

  • Citadel is the control plane component responsible for certificate issuance and rotation. Citadel provides strong service-to-service and end-user authentication with built-in identity and credential management. You can use Citadel to upgrade unencrypted traffic in the service mesh. Using Citadel, operators can enforce policies based on service identity rather than on network controls.

Supported configurations

The following are the only supported configurations for the Red Hat OpenShift Service Mesh 0.11.TechPreview:

  • Red Hat OpenShift Container Platform version 3.11.

  • Red Hat OpenShift Container Platform version 4.1.

OpenShift Online and OpenShift Dedicated are not supported for Red Hat OpenShift Service Mesh 0.11.TechPreview.

  • The deployment must be contained to a single OpenShift Container Platform cluster (no federation).

  • This release of Red Hat OpenShift Service Mesh is only available on OpenShift Container Platform x86_64.

  • Red Hat OpenShift Service Mesh is only suited for OpenShift Container Platform Software Defined Networking (SDN) configured as a flat network with no external providers.

  • This release supports only configurations where all Service Mesh components are contained within the OpenShift cluster in which the mesh operates. It does not support management of microservices that reside outside of the cluster, or multi-cluster scenarios.

  • The Kiali observability console is only supported on the two most recent releases of the Chrome, Edge, Firefox, or Safari browsers.

For more information about support for this technology preview, see this Red Hat Knowledge Base article.

Comparing Red Hat OpenShift Service Mesh and upstream Istio community installations

An installation of Red Hat OpenShift Service Mesh differs from upstream Istio community installations in multiple ways. The modifications to Red Hat OpenShift Service Mesh are sometimes necessary to resolve issues, provide additional features, or to handle differences when deploying on OpenShift.

The current release of Red Hat OpenShift Service Mesh differs from the current upstream Istio community release in the following ways:

Automatic injection

The upstream Istio community installation automatically injects the sidecar to namespaces you have labeled.

Red Hat OpenShift Service Mesh does not automatically inject the sidecar to any namespaces, but requires you to specify the sidecar.istio.io/inject annotation as illustrated in the Automatic sidecar injection section.

Role Based Access Control features

Role Based Access Control (RBAC) provides a mechanism you can use to control access to a service. You can identify subjects by username or by specifying a set of properties and apply access controls accordingly.

The upstream Istio community installation includes options to perform exact header matches, match wildcards in headers, or check for a header containing a specific prefix or suffix.

Upstream Istio community matching request headers example
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRoleBinding
metadata:
  name: httpbin-client-binding
  namespace: httpbin
spec:
  subjects:
  - user: "cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account"
    properties:
      request.headers[<header>]: "value"

Red Hat OpenShift Service Mesh extends the ability to match request headers by using a regular expression. Specify a property key of request.regex.headers with a regular expression.

Red Hat OpenShift Service Mesh matching request headers by using regular expressions
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRoleBinding
metadata:
  name: httpbin-client-binding
  namespace: httpbin
spec:
  subjects:
  - user: "cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account"
    properties:
      request.regex.headers[<header>]: "<regular expression>"

Automatic route creation

Automatic route creation is currently incompatible with multi-tenant Service Mesh installations. Ensure that it is disabled in your ServiceMeshControlPlane if you plan to attempt a multi-tenant installation.

Red Hat OpenShift Service Mesh automatically manages OpenShift routes for Istio gateways. When an Istio gateway is created, updated, or deleted in the Service Mesh, a matching OpenShift route is created, updated, or deleted.

If the following gateway is created:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway1
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - www.bookinfo.com
    - bookinfo.example.com

The following OpenShift routes are automatically created:

$ oc -n istio-system get routes
NAME              HOST/PORT                                            PATH      SERVICES               PORT      TERMINATION   WILDCARD
gateway1-lvlfn    bookinfo.example.com                                           istio-ingressgateway   <all>                   None
gateway1-scqhv    www.bookinfo.com                                               istio-ingressgateway   <all>                   None

If this gateway is deleted, Red Hat OpenShift Service Mesh will delete the routes.

Manually created routes are not managed by the Service Mesh.

Catch-all domains

Red Hat OpenShift Service Mesh does not support catch-all or wildcard domains. If Service Mesh finds a catch-all domain in the gateway definition, Red Hat OpenShift Service Mesh will create the route but relies on OpenShift to create a default hostname. The route that Service Mesh creates will not be a catch-all route and will have a hostname with a <route-name>[-<namespace>].<suffix> structure.

Subdomains

Subdomains are supported, but they are not enabled by default in OpenShift. Red Hat OpenShift Service Mesh will create the route with the subdomain, but it will only work after you enable subdomains in OpenShift. See the OpenShift documentation on Wildcard Routes for more information.
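
As an illustration only, on OpenShift Container Platform 3.11 wildcard route support is typically enabled on the default router with a command such as the following; the router deployment name and namespace are assumptions that may differ in your environment, so consult the Wildcard Routes documentation for your version:

$ oc set env dc/router ROUTER_ALLOW_WILDCARD_ROUTES=true -n default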

TLS

OpenShift routes are configured to support TLS.

All OpenShift routes created by Red Hat OpenShift Service Mesh are in the istio-system namespace.

OpenSSL

Red Hat OpenShift Service Mesh replaces BoringSSL with OpenSSL. OpenSSL is a software library that contains an open source implementation of the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols. The Red Hat OpenShift Service Mesh Proxy binary dynamically links the OpenSSL libraries (libssl and libcrypto) from the underlying UBI8 operating system.
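
If you want to spot-check the dynamic linking on a running workload, one possible sketch is the following; it assumes that ldd is available in the proxy image and that the Envoy binary is located at /usr/local/bin/envoy, which may differ in your environment:

$ oc exec <pod-name> -c istio-proxy -- ldd /usr/local/bin/envoy | grep -E 'libssl|libcrypto'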

Multi-tenant installations

Red Hat OpenShift Service Mesh allows you to configure multi-tenant control plane installations, specify the namespaces that can access its Service Mesh, and isolate the Service Mesh from other control plane instances.

Red Hat OpenShift Service Mesh installation overview

The Red Hat OpenShift Service Mesh installation process creates two different projects (namespaces):

  • istio-operator project (1 pod)

  • istio-system project (17 pods)

You first create a Kubernetes operator. This operator defines and monitors a custom resource that manages the deployment, updating, and deletion of the Service Mesh components.

Depending on how you define the custom resource file, you can install one or more of the following components when you install the Service Mesh:

  • Istio - based on the open source Istio project, lets you connect, secure, control, and observe the microservices that make up your applications.

  • Jaeger - based on the open source Jaeger project, lets you perform tracing to monitor and troubleshoot transactions in complex distributed systems.

  • Kiali - based on the open source Kiali project, Kiali provides observability for your service mesh. Using Kiali lets you view configurations, monitor traffic, and view and analyze traces in a single console.

  • Launcher - based on the open source fabric8 community, this integrated development platform helps you build cloud native applications and microservices. Red Hat OpenShift Service Mesh includes several boosters that let you explore features of the Service Mesh.

During the installation the operator creates an Ansible job that runs an Ansible playbook that performs the following installation and configuration tasks automatically:

  • Creates the istio-system namespace

  • Creates the openshift-ansible-istio-installer-job which installs the following components:

    • Istio components:

      • istio-citadel

      • istio-egressgateway

      • istio-galley

      • istio-ingressgateway

      • istio-pilot

      • istio-policy

      • istio-sidecar-injector

      • istio-telemetry

    • Elasticsearch

    • Grafana

    • Jaeger components:

      • jaeger-agent

      • jaeger-collector

      • jaeger-query

    • Kiali components (if configured in the custom resource definition):

      • Kiali

    • Prometheus

    • 3scale Components (if configured in the custom resource definition):

      • 3scale-istio-adapter

  • Performs the following launcher configuration tasks (if configured in the custom resource definition):

    • Creates a devex project and installs the Fabric8 launcher into that project.

    • Adds the cluster admin role to the OpenShift Container Platform user specified in the launcher parameters in the custom resource file.

Prerequisites

Red Hat OpenShift Service Mesh installation prerequisites

Before you can install Red Hat OpenShift Service Mesh, you must meet the following prerequisites:

  • Possess an active OpenShift Container Platform subscription on your Red Hat account. If you do not have a subscription, contact your sales representative for more information.

  • Install OpenShift Container Platform version 3.11, or higher. For more information about the system and environment requirements, see the OpenShift Container Platform documentation.

  • Install the version of the OpenShift Container Platform command line utility (the oc client tool) that matches your OpenShift Container Platform version and add it to your path. For example, if you have OpenShift Container Platform 3.11 you must have the matching oc client version 3.11. For installation instructions, see the OpenShift Container Platform Command Line Reference document.
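
For example, you can compare the reported client and server versions with:

$ oc version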

Preparing the OpenShift Container Platform installation

Before you can install the Service Mesh into an OpenShift Container Platform installation, you must modify the master configuration and each of the schedulable nodes. These changes enable the features that are required in the Service Mesh and also ensure that Elasticsearch features function correctly.

Updating the node configuration

Updating the node configuration is not necessary if you are running OpenShift Container Platform 4.1.

To run the Elasticsearch application, you must make a change to the kernel configuration on each node. This change is handled through the sysctl service.

Make the following changes on each node within your OpenShift Container Platform installation:

  1. Create a file named /etc/sysctl.d/99-elasticsearch.conf with the following contents:

    vm.max_map_count = 262144
  2. Execute the following command:

    $ sysctl vm.max_map_count=262144
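
You can verify the setting on each node; the command should report the new value:

$ sysctl vm.max_map_count
vm.max_map_count = 262144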

Updating the container registry

If you are running OpenShift Container Platform 3.11 on-premise, follow these steps to configure access to registry.redhat.io.

To access the private registry.redhat.io from OpenShift Container Platform 3.11 and pull the Red Hat OpenShift Service Mesh images for the installation process:

  1. Run the following command:

    $ docker login registry.redhat.io
  2. When prompted, enter your username and password.

  3. When you successfully log in, the ~/.docker/config.json is created with the following contents:

    {
         "auths": {
             "registry.redhat.io": {
                 "auth": "XXXXXXXXXXXXXXXXXX"
             }
         }
    }
  4. On each node, create a /var/lib/origin/.docker directory.

  5. On each node, copy the ~/.docker/config.json file to the /var/lib/origin/.docker directory.
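
If you have SSH access to the nodes, a possible way to script steps 4 and 5 is shown below; the host names are placeholders for your own node names, and root SSH access is assumed:

$ for node in node1.example.com node2.example.com; do
    ssh root@$node 'mkdir -p /var/lib/origin/.docker'
    scp ~/.docker/config.json root@$node:/var/lib/origin/.docker/config.json
  done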

Installing Service Mesh

Installing the Red Hat OpenShift Service Mesh

Installing the Service Mesh involves installing the operator, and then creating and managing a custom resource definition file to deploy the control plane.

Starting with Red Hat OpenShift Service Mesh 0.9.TechPreview, Mixer’s policy enforcement is disabled by default. You must enable it to run policy tasks. See Update Mixer policy enforcement for instructions on enabling Mixer policy enforcement.

What is an operator?

An operator is a piece of software that enables you to implement and automate common activities, such as installation and configuration, in your Kubernetes cluster. It acts as a controller, allowing you to set or change the desired state of objects within your cluster.

Installing the operator

The Service Mesh installation process introduces an operator to manage the installation of the control plane within the istio-operator namespace. This operator defines and monitors a custom resource related to the deployment, update, and deletion of the control plane.

You can find the operator templates on GitHub.

The following commands install the Service Mesh operator into an existing OpenShift Container Platform installation; you can run them from any host with access to the cluster. Ensure that you are logged in as a cluster admin before executing these commands.

$ oc new-project istio-operator
$ oc new-project istio-system
$ oc apply -n istio-operator -f https://raw.githubusercontent.com/Maistra/istio-operator/maistra-0.11/deploy/servicemesh-operator.yaml

Verifying operator installation

To verify that the operator is installed correctly, issue the following command:

$ oc get pods -n istio-operator -l name=istio-operator

When the operator reaches a running state, it is installed correctly.

NAME                              READY     STATUS    RESTARTS   AGE
istio-operator-5cd6bcf645-fvb57   1/1       Running   0          1h
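
If the pod does not reach the Running state, you can inspect the operator logs, for example:

$ oc logs -n istio-operator $(oc get pods -n istio-operator -l name=istio-operator -o name)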

Creating a custom resource file

The istio-system namespace is used as an example throughout the Service Mesh documentation, but you can use other namespaces as necessary.

To deploy the Service Mesh control plane, you must deploy a custom resource. A custom resource is an object that extends the Kubernetes API, or allows you to introduce your own API into a project or a cluster. You define a custom resource as a yaml file that defines the object, and then you use the yaml file to create the object. The following example contains all of the supported parameters and deploys Red Hat OpenShift Service Mesh 0.11.TechPreview images based on Red Hat Enterprise Linux (RHEL).

The 3scale Istio Adapter is deployed and configured in the custom resource file. It also requires a working 3scale account (SaaS or On-Premises).

Full example istio-installation.yaml
  apiVersion: maistra.io/v1
  kind: ServiceMeshControlPlane
  metadata:
    name: basic-install
  spec:
    threeScale:
      enabled: false

    istio:
      global:
        proxy:
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 128Mi

      gateways:
        istio-egressgateway:
          autoscaleEnabled: false
        istio-ingressgateway:
          autoscaleEnabled: false
          ior_enabled: false

      mixer:
        policy:
          autoscaleEnabled: false

        telemetry:
          autoscaleEnabled: false
          resources:
            requests:
              cpu: 100m
              memory: 1G
            limits:
              cpu: 500m
              memory: 4G

      pilot:
        autoscaleEnabled: false
        traceSampling: 100.0

      kiali:
        dashboard:
          user: admin
          passphrase: admin
      tracing:
        enabled: true

      multitenant: false

Custom resource parameters

The following examples illustrate use of the supported custom resource parameters for Red Hat OpenShift Service Mesh and the tables provide additional information about supported parameters.

The resources you configure for Red Hat OpenShift Service Mesh with these custom resource parameters, including CPUs, memory, and the number of pods, are based on the configuration of your OpenShift cluster. Configure these parameters based on the available resources in your current cluster configuration.

Istio global example

In order for the 3scale Istio Adapter to work, disablePolicyChecks must be false.

  istio:
    global:
      hub: `maistra/` or `registry.redhat.io/openshift-istio-tech-preview/`
      tag: 0.11.0
      proxy:
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 500m
            memory: 128Mi
      mtls: false
      disablePolicyChecks: true
      policyCheckFailOpen: false
      imagePullSecrets:
        - MyPullSecret

See the OpenShift documentation on Compute Resources for additional details on specifying CPU and memory resources for the containers in your pod.

Table 1. General parameters

  • disablePolicyChecks - This Boolean indicates whether to disable Mixer policy checks. Values: true/false. Default: true.

  • policyCheckFailOpen - This Boolean indicates whether traffic is allowed to pass through to the Envoy sidecar when the Mixer policy service cannot be reached. Values: true/false. Default: false.

  • tag - The tag that the operator uses to pull the Istio images. Values: a valid container image tag. Default: 0.11.0.

  • hub - The hub that the operator uses to pull Istio images. Values: a valid image repository. Default: maistra/ or registry.redhat.io/openshift-istio-tech-preview/.

  • mtls - This controls whether to enable Mutual Transport Layer Security (mTLS) between services by default. Values: true/false. Default: false.

Table 2. Proxy parameters

  • Resources:

    • cpu - The amount of CPU resources requested for the Envoy proxy. Values: CPU resources in millicores based on your environment’s configuration. Default: 100m.

    • memory - The amount of memory requested for the Envoy proxy. Values: available memory in bytes based on your environment’s configuration. Default: 128Mi.

  • Limits:

    • cpu - The maximum amount of CPU resources the Envoy proxy is permitted to use. Values: CPU resources in millicores based on your environment’s configuration. Default: 2000m.

    • memory - The maximum amount of memory the Envoy proxy is permitted to use. Values: available memory in bytes based on your environment’s configuration. Default: 128Mi.

  • imagePullSecret - If access to the registry providing the Istio images is secure, list an imagePullSecret here. Values: a valid pull secret. Default: None. Example value:
{"auths":{"subdomain.example.com":{"auth":"b3BlbnNoaWZ0LXJlbGVhc2UtZGV2K2dlZWtzcGVydGlzZTFlamRidHUwdnZmM2djOXR5b3F4bzM3Nndyejo2TzhCVFkxOVdUOUJTSU1SS0FNODVRUFBQUUQ3NEZUVUUFAQyTUFKMk41T0lLUklLSE5YV01SVlRVTEZaSUs5","email":"user@example.com"},"quay.io":{"auth":"b3BlbnNoaWZ0LXJlbGVhc2UtZGV2K2dlZWtzcGVydGlzZTFlamRidHUwdnZmM2djOXR5b3F4bzM3Nndyejo2TzhCVFkxOVdUOUJTSU1SS0FNODVRUFBQUUQ3NEZVUVVXWFAyTUFKMk41T0lLUklLSE5YV01SVlRVTEZaSUs5","email":"user@example.com"},"registry.connect.redhat.com":{"auth":"NTE5MzM5Mjh8dWhjLTFFSmRiVFUwVnZmM0djOVR5T3FYTzM3NndyejpleUpoYkdjaU9pSlNVelV4TWlKOS5leUp6ZFdJaU9pSTBOMk13TWpjek1EVmpORFEwTW1GaU9EY3dOalJrWmpFME9EYzNaakJtWmlKOS5HaG54eFpHbHF5cmZIUzEtdEpyZVVHQTN0bzhNa2ZFMVdmX2Q4ZHZoa1ZvQlR0R3NheUNUS2RpQmpqZHF2VFVhZ0FzcWtWMFVNZHp3SW1nSTMwRjR0QzFJSzdXU1I1MF9CQzlfS3ExWDRZbzZIaktWTWFFc29KMHFzQU1IN3lYamUzSXdpRV8yLVh1dUJ4Z2VHVzdDV21sYVlNLWVSaGxEOHFUUzE2LVhRY0dQYW1YWjdWbUJib2lCdENCZmQyaktmZ2pSN1ZOTV9sLUh0YXVORURRYWg3VmQzdUZqT3ZmOHFGT1dTeVBrakxoNWE4ZVU5NXJLVHMxaWp6TkJuZV90R2U4WkVfQUxVb2V0SGhfV1M5SE1aeGtnT01FM0FNMFZ2Ml9GX2szc0RiUmt6U1VxLVJ0ZTE2OTJGQmJKY2x6NTUxbXpnRGtJa2lpdzh5X0ViT2E4Z0N2YjNEVU1uZi1RZ2dMVkRSMW5QdWZTSVJ3QTBzTTJZOVFUVTNGTnZCc0o0NmNVRU5uTDRsb1Z1WmhwOWhFVTFTV2NXd0UtZm40ZGVfNVJwN0FwNTJqQnphWTg4OWdFRWtWdXllZmpRX0RPTURGNDd1VDN1SnJ5MDBFVmIzRm40QlRaTVVWTG5iU3I1bkFYU204RU1qMzFOVnZQSzRsS3d5d29WRzZZaEdQX2ZXc1dUcGFHSGVoTkxYMnF2aGJDTy1hYnAyUXRweHo3aHFnY3RuNmpXSVZzWmQtMGhYS3NnX2ppZllfZ18tLW10b3oydHVoU0VBY2xRLU81NEdEQjhfb1RkajlwQWQ2NWY2dWxQaDV4N1IwQXpaZjZCdWtfY1ZRNkh3LXBpT3FlOWpWYlljNS0xVU9peGo4ejRWcXoyN1lTUHBhNGw2ejVsdUt6clNpZnVpUQ==","email":"user@example.com"},"something.example.io":{"auth":"NTE5MzM5Mjh8dWhjLTFFSmRiVFUwVnZmM0djOVR5T3FYTzM3NndyejpleUpoYkdjaU9pSlNVelV4TWlKOS5leUp6ZFdJaU9pSTBOMk13TWpjek1EVmpORFEwTW1GaU9EY3dOalJrPmwFME9EYzNaakJtWmlKOS5HaG54eFpHbHF5cmZIUzEtdEpyZVVHQTN0bzhNa2ZFMVdmX2Q4ZHZoa1ZvQlR0R3NheUNUS2RpQmpqZHF2VFVhZ0FzcWtWMFVNZHp3SW1nSTMwRjR0QzFJSzdXU1I1MF9CQzlfS3ExWDRZbzZIaktWTWFFc29KMHFzQU1IN3lYamUzSXdpRV8yLVh1dUJ4Z2VHVzdDV21sYVlNLWVSaGxEOHFUUzE2LVhRY0dQYW1YWjdWbUJib2lCdENCZmQyaktmZ2pSN1ZOTV9sLUh0YXVORURRYWg3VmQzdUZqT3ZmOHFGT1dTeVBrakxoNWE4ZVU5NXJLVHMxaWp6TkJuZV90R2U4WkVfQUxVb2V0SGhfV1M5SE1aeGtnT01FM0FNMFZ2Ml9GX2szc0RiUmt6U1VxLVJ0ZTE2OTJGQmJKY2x6NTUxbXpnRGtJa2lpdzh5X0ViT2E4Z0N2YjNEVU1uZi1RZ2dMVkRSMW5QdWZTSVJ3QTBzTTJZOVFUVTNGTnZCc0o0NmNVRU5uTDRsb1Z1WmhwOWhFVTFTV2NXd0UtZm40ZGVfNVJwN0FwNTJqQnphWTg4OWdFRWtWdXllZmpRX0RPTURGNDd1VDN1SnJ5MDBFVmIzRm40QlRaTVVWTG5iU3I1bkFYU204RU1qMzFOVnZQSzRsS3d5d29WRzZZaEdQX2ZXc1dUcGFHSGVoTkxYMnF2aGJDTy1hYnAyUXRweHo3aHFnY3RuNmpXSVZzWmQtMGhYS3NnX2ppZllfZ18tLW10b3oydHVoU0VBY2xRLU81NEdEQjhfb1RkajlwQWQ2NWY2dWxQaDV4N1IwQXpaZjZCdWtfY1ZRNkh3LXBpT3FlOWpWYlljNS0xVU9peGo4ejRWcXoyN1lTUHBhNGw2ejVsdUt6clNpZnVpUQ==","email":"user@example.com"}}}


Istio gateway example

Automatic route creation does not currently work with multi-tenancy. Set ior_enabled to false for multi-tenant installations.

  gateways:
    istio-egressgateway:
      autoscaleEnabled: false
      autoscaleMin: 1
      autoscaleMax: 5
    istio-ingressgateway:
      autoscaleEnabled: false
      autoscaleMin: 1
      autoscaleMax: 5
      ior_enabled: false

Table 3. Istio Gateway parameters

  • istio-egressgateway:

    • autoscaleEnabled - This parameter enables autoscaling. Values: true/false. Default: true.

    • autoscaleMin - The minimum number of pods to deploy for the egress gateway based on the autoscaleEnabled setting. Values: a valid number of allocatable pods based on your environment’s configuration. Default: 1.

    • autoscaleMax - The maximum number of pods to deploy for the egress gateway based on the autoscaleEnabled setting. Values: a valid number of allocatable pods based on your environment’s configuration. Default: 5.

  • istio-ingressgateway:

    • autoscaleEnabled - This parameter enables autoscaling. Values: true/false. Default: true.

    • autoscaleMin - The minimum number of pods to deploy for the ingress gateway based on the autoscaleEnabled setting. Values: a valid number of allocatable pods based on your environment’s configuration. Default: 1.

    • autoscaleMax - The maximum number of pods to deploy for the ingress gateway based on the autoscaleEnabled setting. Values: a valid number of allocatable pods based on your environment’s configuration. Default: 5.

    • ior_enabled - This parameter controls whether Istio routes are automatically configured in OpenShift. Values: true/false. Default: true.

Istio Mixer example
  mixer:
    enabled: true
    policy:
      autoscaleEnabled: false

    telemetry:
      autoscaleEnabled: false
      resources:
        requests:
          cpu: 100m
          memory: 1G
        limits:
          cpu: 500m
          memory: 4G
Table 4. Istio Mixer policy parameters

  • enabled - This enables Mixer. Values: true/false. Default: true.

  • autoscaleEnabled - This controls whether to enable autoscaling. Disable this for small environments. Values: true/false. Default: true.

  • autoscaleMin - The minimum number of pods to deploy based on the autoscaleEnabled setting. Values: a valid number of allocatable pods based on your environment’s configuration. Default: 1.

  • autoscaleMax - The maximum number of pods to deploy based on the autoscaleEnabled setting. Values: a valid number of allocatable pods based on your environment’s configuration. Default: 5.

Table 5. Istio Mixer telemetry parameters

  • Resources:

    • cpu - The amount of CPU resources requested for Mixer telemetry. Values: CPU resources in millicores based on your environment’s configuration. Default: 1000m.

    • memory - The amount of memory requested for Mixer telemetry. Values: available memory in bytes based on your environment’s configuration. Default: 1G.

  • Limits:

    • cpu - The maximum amount of CPU resources Mixer telemetry is permitted to use. Values: CPU resources in millicores based on your environment’s configuration. Default: 4800m.

    • memory - The maximum amount of memory Mixer telemetry is permitted to use. Values: available memory in bytes based on your environment’s configuration. Default: 4G.

Istio Pilot example
  pilot:
    resources:
      requests:
        cpu: 100m
    autoscaleEnabled: false
    traceSampling: 100.0
Table 6. Istio Pilot parameters

  • cpu - The amount of CPU resources requested for Pilot. Values: CPU resources in millicores based on your environment’s configuration. Default: 500m.

  • memory - The amount of memory requested for Pilot. Values: available memory in bytes based on your environment’s configuration. Default: 2048Mi.

  • traceSampling - This value controls how often random sampling occurs. Note: increase for development or testing. Values: a valid number. Default: 1.0.

Tracing
Table 7. Tracing parameters

  • enabled - This enables tracing in the environment. Values: true/false. Default: true.

Kiali example

Kiali supports OAuth authentication and dashboard users. By default, Kiali uses OpenShift OAuth, but you can enable a dashboard user by adding a dashboard user and passphrase.

  kiali:
    enabled: true
    hub: kiali/
    tag: v0.20.0
    dashboard:
      user: admin
      passphrase: admin
Table 8. Kiali parameters

  • enabled - This enables or disables Kiali in Service Mesh. Kiali is installed by default. If you do not want to install Kiali, change the enabled value to false. Values: true/false. Default: true.

  • hub - The hub that the operator uses to pull Kiali images. Values: a valid image repository. Default: kiali/ or registry.redhat.io/openshift-istio-tech-preview/.

  • tag - The tag that the operator uses to pull the Kiali images. Values: a valid container image tag. Default: 0.20.0.

  • user - The username to access the Kiali console. Note: this is not related to any OpenShift account. Values: a valid Kiali dashboard username. Default: None.

  • passphrase - The password used to access the Kiali console. Note: this is not related to any OpenShift account. Values: a valid Kiali dashboard passphrase. Default: None.

3scale example
  threescale:
      enabled: true
      PARAM_THREESCALE_LISTEN_ADDR: 3333
      PARAM_THREESCALE_LOG_LEVEL: info
      PARAM_THREESCALE_LOG_JSON: true
      PARAM_THREESCALE_REPORT_METRICS: true
      PARAM_THREESCALE_METRICS_PORT: 8080
      PARAM_THREESCALE_CACHE_TTL_SECONDS: 300
      PARAM_THREESCALE_CACHE_REFRESH_SECONDS: 180
      PARAM_THREESCALE_CACHE_ENTRIES_MAX: 1000
      PARAM_THREESCALE_CACHE_REFRESH_RETRIES: 1
      PARAM_THREESCALE_ALLOW_INSECURE_CONN: false
      PARAM_THREESCALE_CLIENT_TIMEOUT_SECONDS: 10
Table 9. 3scale parameters

  • enabled - Whether to use the 3scale adapter. Values: true/false. Default: false.

  • PARAM_THREESCALE_LISTEN_ADDR - Sets the listen address for the gRPC server. Values: a valid port number. Default: 3333.

  • PARAM_THREESCALE_LOG_LEVEL - Sets the minimum log output level. Values: debug, info, warn, error, or none. Default: info.

  • PARAM_THREESCALE_LOG_JSON - Controls whether the log is formatted as JSON. Values: true/false. Default: true.

  • PARAM_THREESCALE_REPORT_METRICS - Controls whether 3scale system and backend metrics are collected and reported to Prometheus. Values: true/false. Default: true.

  • PARAM_THREESCALE_METRICS_PORT - Sets the port that the 3scale /metrics endpoint can be scraped from. Values: a valid port number. Default: 8080.

  • PARAM_THREESCALE_CACHE_TTL_SECONDS - Time period, in seconds, to wait before purging expired items from the cache. Values: a time period in seconds. Default: 300.

  • PARAM_THREESCALE_CACHE_REFRESH_SECONDS - Time period before expiry when cache elements are attempted to be refreshed. Values: a time period in seconds. Default: 180.

  • PARAM_THREESCALE_CACHE_ENTRIES_MAX - The maximum number of items that can be stored in the cache at any time. Set to 0 to disable caching. Values: a valid number. Default: 1000.

  • PARAM_THREESCALE_CACHE_REFRESH_RETRIES - The number of times unreachable hosts are retried during a cache update loop. Values: a valid number. Default: 1.

  • PARAM_THREESCALE_ALLOW_INSECURE_CONN - Allows skipping certificate verification when calling 3scale APIs. Enabling this is not recommended. Values: true/false. Default: false.

  • PARAM_THREESCALE_CLIENT_TIMEOUT_SECONDS - Sets the number of seconds to wait before terminating requests to 3scale System and Backend. Values: a time period in seconds. Default: 10.

Configuring multi-tenant installations

See the Multi-tenant Red Hat OpenShift Service Mesh install chapter for instructions on installing and configuring a Service Mesh instance.

Update Mixer policy enforcement

In previous versions of Red Hat OpenShift Service Mesh, Mixer’s policy enforcement was enabled by default. Mixer policy enforcement is now disabled by default. You must enable it before running policy tasks.

To check the current Mixer policy enforcement status, run the following command:

$ oc get cm -n istio-system istio -o jsonpath='{.data.mesh}' | grep disablePolicyChecks

If disablePolicyChecks: true, follow these steps to enable policy enforcement in Mixer:

  1. Edit the Service Mesh ConfigMap:

    $ oc edit cm -n istio-system istio
  2. Locate disablePolicyChecks: true within the ConfigMap and change the value to false.

  3. Save the configuration and exit the editor.

  4. Re-check the Mixer policy enforcement status to ensure it is set to false.
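
For example, you can re-run the status query used above to confirm the change:

$ oc get cm -n istio-system istio -o jsonpath='{.data.mesh}' | grep disablePolicyChecks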

Deploying the control plane

With the introduction of OpenShift Container Platform 4.1, the network capabilities of the host are now based on nftables rather than iptables. This change impacts the initialization of the Service Mesh application components. Service Mesh needs to know what host operating system OpenShift is running on to correctly initialize Service Mesh networking components.

If the OpenShift installation is deployed on a Red Hat Enterprise Linux (RHEL) 7 host, then the custom resource must explicitly request the RHEL 7 proxy-init container image by including the following:

Enabling the proxy-init container for RHEL 7 hosts
  apiVersion: maistra.io/v1
  kind: ServiceMeshControlPlane
  spec:
    istio:
      global:
        proxy_init:
          image: proxy-init

Use the custom resource definition file you created to deploy the Service Mesh control plane. To deploy the control plane, run the following command:

$ oc create -n istio-system -f istio-installation.yaml

The operator creates the istio-system namespace and runs the installer job; this job installs and configures the control plane. You can follow the progress of the installation by watching the pods.

To watch the progress of the pods, run the following command:

$ oc get pods -n istio-system -w

Multi-tenant Service Mesh Installation

Multi-tenant Red Hat OpenShift Service Mesh installation

The Red Hat OpenShift Service Mesh operator provides support for multi-tenant control plane installations. A multi-tenant control plane is configured so that only specified namespaces can be joined into its Service Mesh, isolating the mesh from other installations.

You cannot use multi-tenant control plane installations in conjunction with a cluster-wide control plane installation. Red Hat OpenShift Service Mesh installations must either be multi-tenant or single, cluster-wide installations.

Automatic route creation is currently incompatible with multi-tenant Service Mesh installations. Ensure that it is disabled, by setting ior_enabled to false in your ServiceMeshControlPlane if you plan to attempt a multi-tenant installation.

Known issues

  • MeshPolicy is still a cluster-scoped resource and applies to all control planes installed in OpenShift. This can prevent the installation of multiple control planes or cause unknown behavior if one control plane is deleted.

  • The Jaeger agent runs as a DaemonSet, therefore tracing may only be enabled for a single ServiceMeshControlPlane instance.

  • If you delete the project that contains the control plane before you delete the ServiceMeshControlPlane resource, some parts of the installation may not be removed:

    • Service accounts added to the SecurityContextConstraints may not be removed.

    • OAuthClient resources associated with Kiali may not be removed, or its list of redirectURIs may not be accurate.

Comparison of multi-tenant and cluster-wide installations

The main difference between a multi-tenant installation and a cluster-wide installation is the scope of privileges used by the control plane deployments, for example, Galley and Pilot. The components no longer use cluster-scoped Role Based Access Control (RBAC) ClusterRoleBinding, but rely on namespace-scoped RBAC RoleBinding. Every namespace in the members list will have a RoleBinding for each service account associated with a control plane deployment and each control plane deployment will only watch those member namespaces. Each member namespace has a maistra.io/member-of label added to it, where the member-of value is the namespace containing the control plane installation.
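
For example, to check whether a namespace has been joined into a mesh, you can inspect its labels and look for maistra.io/member-of; the namespace name here is a placeholder:

$ oc get namespace <member-namespace> --show-labels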

Configuring a multi-tenant installation

You can configure a multi-tenant installation by setting the multitenant option to true in the istio section of the ServiceMeshControlPlane resource. For example,

Multi-tenant custom resource example
  apiVersion: maistra.io/v1
  kind: ServiceMeshControlPlane
  spec:
    istio:
      # enable multitenant
      multitenant: true
      # additional control plane configuration details

Configuring Red Hat OpenShift Service Mesh namespaces in multi-tenant installations

A multi-tenant control plane installation only affects namespaces configured as part of the Service Mesh. You must specify the namespaces associated with the Service Mesh in a ServiceMeshMemberRoll resource located in the same namespace as the ServiceMeshControlPlane resource and name it default. Here is an example that joins the bookinfo namespace into the Service Mesh:

Namespace configuration example
  apiVersion: maistra.io/v1
  kind: ServiceMeshMemberRoll
  metadata:
    name: default
  spec:
    members:
    # a list of namespaces joined into the service mesh
    - bookinfo

You can add any number of namespaces, but a namespace can only belong to one ServiceMeshMemberRoll.
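
For example, a ServiceMeshMemberRoll that joins several namespaces might look like the following sketch; the additional namespace names are placeholders for your own projects:

  apiVersion: maistra.io/v1
  kind: ServiceMeshMemberRoll
  metadata:
    name: default
  spec:
    members:
    - bookinfo
    - my-frontend
    - my-backend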

ServiceMeshMemberRoll resources are reconciled in response to the following events:

  • The ServiceMeshMemberRoll is created, updated, or deleted

  • The ServiceMeshControlPlane resource in the namespace containing the ServiceMeshMemberRoll is created or updated

  • A namespace listed in the ServiceMeshMemberRoll is created or deleted

The ServiceMeshMemberRoll is deleted when its corresponding ServiceMeshControlPlane resource is deleted.

The member namespaces are only updated if the control plane installation is successful.

Post installation tasks

Verifying the installation

Run the following command to determine if the operator finished deploying the control plane.

The name of the resource is the name set in the metadata section of your custom resource; in the preceding example, this is basic-install.

$ oc get controlplane/basic-install -n istio-system --template='{{range .status.conditions}}{{printf "%s=%s, reason=%s, message=%s\n\n" .type .status .reason .message}}{{end}}'

When the control plane installation is finished, the output is similar to the following:

Installed=True, reason=InstallSuccessful, message=%!s(<nil>)

After the control plane is deployed, issue the following command to check the status of the pods:

$ oc get pods -n istio-system

Verify that the pods are in a state similar to this:

NAME                                          READY     STATUS      RESTARTS   AGE
3scale-istio-adapter-7df4db48cf-sc98s         1/1       Running     0          13s
elasticsearch-0                               1/1       Running     0          29s
grafana-c7f5cc6b6-vg6db                       1/1       Running     0          33s
istio-citadel-d6d6bb7bb-jgfwt                 1/1       Running     0          1m
istio-egressgateway-69448cf7dc-b2qj5          1/1       Running     0          1m
istio-galley-f49696978-q949d                  1/1       Running     0          1m
istio-ingressgateway-7759647fb6-pfpd5         1/1       Running     0          1m
istio-pilot-7595bfd696-plffk                  2/2       Running     0          1m
istio-policy-779454b878-xg7nq                 2/2       Running     2          1m
istio-sidecar-injector-6655b6ffdb-rn69r       1/1       Running     0          1m
istio-telemetry-dd9595888-8xjz2               2/2       Running     2          1m
jaeger-agent-gmk72                            1/1       Running     0          25s
jaeger-collector-7f644df9f5-dbzcv             1/1       Running     1          25s
jaeger-query-6f47bf4777-h4wmh                 1/1       Running     1          25s
kiali-7cc48b6cbb-74gcf                        1/1       Running     0          17s
prometheus-5f9fd67f8-r6b86                    1/1       Running     0          1m

If you also installed the Fabric8 launcher, monitor the containers within the devex project until the following state is reached:

NAME                          READY     STATUS    RESTARTS   AGE
configmapcontroller-1-8rr6w   1/1       Running   0          1m
launcher-backend-2-2wg86      1/1       Running   0          1m
launcher-frontend-2-jxjsd     1/1       Running   0          1m

Application requirements

Requirements for deploying applications on Red Hat OpenShift Service Mesh

When deploying an application into the Service Mesh there are several differences between the behavior of the upstream community version of Istio and the behavior within a Red Hat OpenShift Service Mesh installation.

Configuring security constraints for application service accounts

When deploying an application into a Service Mesh running in an OpenShift environment, it is currently necessary to relax the security constraints placed on the application’s service account to ensure the application can function correctly. Each service account must be granted permissions with the anyuid and privileged Security Context Constraints (SCC) to enable the sidecars to run correctly.

The privileged SCC is required so that the istio-init initialization container can successfully update the pod’s networking configuration, and the anyuid SCC is required so that the sidecar container can run with its required user ID of 1337.

To configure the correct permissions it is necessary to identify the service accounts being used by your application’s pods. For most applications, this will be the default service account, however your Deployment/DeploymentConfig may override this within the pod specification by providing the serviceAccountName.
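
For example, one way to list the service account used by each pod in a namespace (replace <namespace> with your application’s namespace) is:

$ oc get pods -n <namespace> -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.serviceAccountName}{"\n"}{end}'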

For each identified service account you must update the cluster configuration to ensure they are granted access to the anyuid and privileged SCCs by executing the following commands from an account with cluster admin privileges. Replace <service account> and <namespace> with values specific to your application.

$ oc adm policy add-scc-to-user anyuid -z <service account> -n <namespace>
$ oc adm policy add-scc-to-user privileged -z <service account> -n <namespace>

The relaxing of security constraints is only necessary during the Red Hat OpenShift Service Mesh Technology Preview.

Updating the master configuration

Master configuration updates are not necessary if you are running OpenShift Container Platform 4.1.

Service Mesh relies on a proxy sidecar within the application’s pod to provide service mesh capabilities to the application. You can enable automatic sidecar injection or manage it manually. Red Hat recommends automatic injection by using the annotation, with no need to label namespaces, to ensure that your application contains the appropriate configuration for your service mesh upon deployment. This method requires fewer privileges and does not conflict with other OpenShift capabilities such as builder pods.

The upstream version of Istio injects the sidecar by default if you have labeled the namespace. You are not required to label the namespace with Red Hat OpenShift Service Mesh. However, Red Hat OpenShift Service Mesh requires you to opt in to having the sidecar automatically injected to a deployment. This avoids injecting a sidecar where it is not wanted (for example, build or deploy pods). The webhook checks the configuration of pods deploying into all namespaces to see if they are opting in to injection with the appropriate annotation.

To enable the automatic injection of the Service Mesh sidecar you must first modify the master configuration on each master to include support for webhooks and signing of Certificate Signing Requests (CSRs).

Make the following changes on each master within your OpenShift Container Platform installation:

  1. Change to the directory containing the master configuration file (for example, /etc/origin/master/master-config.yaml).

  2. Create a file named master-config.patch with the following contents:

    admissionConfig:
      pluginConfig:
        MutatingAdmissionWebhook:
          configuration:
            apiVersion: apiserver.config.k8s.io/v1alpha1
            kubeConfigFile: /dev/null
            kind: WebhookAdmission
        ValidatingAdmissionWebhook:
          configuration:
            apiVersion: apiserver.config.k8s.io/v1alpha1
            kubeConfigFile: /dev/null
            kind: WebhookAdmission
  3. In the same directory, issue the following commands to apply the patch to the master-config.yaml file:

    $ cp -p master-config.yaml master-config.yaml.prepatch
    $ oc ex config patch master-config.yaml.prepatch -p "$(cat master-config.patch)" > master-config.yaml
    $ /usr/local/bin/master-restart api && /usr/local/bin/master-restart controllers
Automatic sidecar injection

When deploying an application into the Red Hat OpenShift Service Mesh you must opt in to injection by specifying the sidecar.istio.io/inject annotation with a value of true. The decision to opt in is required to ensure the sidecar injection does not interfere with other OpenShift features such as builder pods used by numerous frameworks within the OpenShift ecosystem.

This example shows the annotation used within the sleep test application. The additional sidecar containers are included when this configuration is deployed within a Red Hat OpenShift Service Mesh installation.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sleep
spec:
  replicas: 1
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "true"
      labels:
        app: sleep
    spec:
      containers:
      - name: sleep
        image: tutum/curl
        command: ["/bin/sleep","infinity"]
        imagePullPolicy: IfNotPresent
Manual sidecar injection

When you use manual sidecar injection, ensure you have access to a running cluster so the correct configuration can be obtained from the istio-sidecar-injector configmap within the istio-system namespace.

Manual injection of the sidecar is supported by using the upstream istioctl command. To obtain the executable and deploy an application with manual injection:

  • Download the appropriate installation for your OS

  • Unpack the installation into a directory and include the bin directory in your PATH

After installation, you can inject the sidecar into your application by executing the following command:

$ istioctl kube-inject -f app.yaml | oc create -f -

This command injects the containers into the application’s yaml configuration and pipes the modified configuration to the oc command to create the deployments.

Tutorials

There are several tutorials to help you learn more about the Service Mesh.

Bookinfo tutorial

The upstream Istio project has an example tutorial called bookinfo, which is composed of four separate microservices used to demonstrate various Istio features. The Bookinfo application displays information about a book, similar to a single catalog entry of an online book store. Displayed on the page is a description of the book, book details (ISBN, number of pages and other information), and book reviews.

The Bookinfo application consists of four separate microservices:

  • The productpage microservice calls the details and reviews microservices to populate the page.

  • The details microservice contains book information.

  • The reviews microservice contains book reviews. It also calls the ratings microservice.

  • The ratings microservice contains book ranking information that accompanies a book review.

There are three versions of the reviews microservice:

  • Version v1 does not call the ratings service.

  • Version v2 calls the ratings service and displays each rating as one to five black stars.

  • Version v3 calls the ratings service and displays each rating as one to five red stars.

Installing the Bookinfo application

The following steps describe deploying and running the Bookinfo tutorial on OpenShift Container Platform with Service Mesh 0.11.TechPreview.

Prerequisites:

  • OpenShift Container Platform 3.11 or higher installed.

  • Red Hat OpenShift Service Mesh 0.11.TechPreview installed.

Red Hat OpenShift Service Mesh implements auto-injection differently than the upstream Istio project, therefore this procedure uses a version of the bookinfo.yaml file annotated to enable automatic injection of the Istio sidecar.

  1. Create a project for the Bookinfo application.

    $ oc new-project myproject
  2. Update the Security Context Constraints (SCC) by adding the service account used by Bookinfo to the anyuid and privileged SCCs in the "myproject" namespace:

    $ oc adm policy add-scc-to-user anyuid -z default -n myproject
    $ oc adm policy add-scc-to-user privileged -z default -n myproject
  3. Deploy the Bookinfo application in the "myproject" namespace by applying the bookinfo.yaml file:

    $ oc apply -n myproject -f https://raw.githubusercontent.com/Maistra/bookinfo/master/bookinfo.yaml
  4. Create the ingress gateway for Bookinfo by applying the bookinfo-gateway.yaml file:

      $ oc apply -n myproject -f https://raw.githubusercontent.com/Maistra/bookinfo/master/bookinfo-gateway.yaml
  5. Set the value for the GATEWAY_URL parameter:

    $ export GATEWAY_URL=$(oc get route -n istio-system istio-ingressgateway -o jsonpath='{.spec.host}')

Verifying the Bookinfo installation

To confirm that the application is successfully deployed, run this command:

$ curl -o /dev/null -s -w "%{http_code}\n" http://$GATEWAY_URL/productpage

Alternatively, you can open http://$GATEWAY_URL/productpage in your browser.

Add default destination rules

  1. If you did not enable mutual TLS:

    $ oc apply -n myproject -f https://raw.githubusercontent.com/istio/istio/release-1.1/samples/bookinfo/networking/destination-rule-all.yaml
  2. If you enabled mutual TLS:

    $ oc apply -n myproject -f https://raw.githubusercontent.com/istio/istio/release-1.1/samples/bookinfo/networking/destination-rule-all-mtls.yaml
  3. To list all available destination rules:

    $ oc get destinationrules -o yaml

Removing the Bookinfo application

When you finish with the Bookinfo application, you can remove it by running the cleanup script.

Several of the other tutorials in this document also use the Bookinfo application. Do not run the cleanup script if you plan to continue with the other tutorials.

  1. Download the cleanup script:

    $ curl -o cleanup.sh https://raw.githubusercontent.com/Maistra/bookinfo/master/cleanup.sh && chmod +x ./cleanup.sh
  2. Delete the Bookinfo virtualservice, gateway, and terminate the pods by running the cleanup script:

    $ ./cleanup.sh
    namespace ? [default] myproject
  3. Confirm shutdown by running these commands:

    $ oc get virtualservices -n myproject
    No resources found.
    $ oc get gateway -n myproject
    No resources found.
    $ oc get pods -n myproject
    No resources found.

Distributed tracing tutorial

Jaeger is an open source distributed tracing system. You use Jaeger for monitoring and troubleshooting microservices-based distributed systems. Using Jaeger you can perform a trace, which follows the path of a request through various microservices that make up an application. Jaeger is installed by default as part of the Service Mesh.

This tutorial uses Service Mesh and the bookinfo tutorial to demonstrate how you can perform a trace using the Jaeger component of Red Hat OpenShift Service Mesh.

Prerequisites:

  • OpenShift Container Platform 3.11 or higher installed.

  • Red Hat OpenShift Service Mesh 0.11.TechPreview installed.

  • Bookinfo demonstration application installed.

Generating traces and analyzing trace data

  1. After you have deployed the Bookinfo application, generate some activity by accessing http://$GATEWAY_URL/productpage and refreshing the page a few times.

  2. A route to access the Jaeger dashboard already exists. Query for details of the route:

    $ export JAEGER_URL=$(oc get route -n istio-system jaeger-query -o jsonpath='{.spec.host}')
  3. Launch a browser and navigate to https://${JAEGER_URL}.

  4. In the left pane of the Jaeger dashboard, from the Service menu, select "productpage" and click the Find Traces button at the bottom of the pane. A list of traces is displayed, as shown in the following image:

    jaeger main screen
  5. Click one of the traces in the list to open a detailed view of that trace. If you click on the top (most recent) trace, you see the details that correspond to the latest refresh of the /productpage.

    jaeger spans

    The trace in the previous figure consists of a few nested spans, each corresponding to a Bookinfo service call, all performed in response to a /productpage request. Overall processing time was 2.62s, with the details service taking 3.56ms, the reviews service taking 2.6s, and the ratings service taking 5.32ms. Each of the calls to remote services is represented by a client-side and server-side span. For example, the details client-side span is labeled productpage details.myproject.svc.cluster.local:9080. The span nested underneath it, labeled details details.myproject.svc.cluster.local:9080, corresponds to the server-side processing of the request. The trace also shows calls to istio-policy, which reflect authorization checks made by Istio.

Removing the tracing tutorial

Follow the procedure for removing the Bookinfo tutorial to remove the Tracing tutorial.

Prometheus tutorial

Prometheus is an open source system and service monitoring toolkit. Prometheus collects metrics from configured targets at specified intervals, evaluates rule expressions, displays the results, and can trigger alerts if some condition is observed to be true. Grafana or other API consumers can be used to visualize the collected data.

This tutorial uses Service Mesh and the bookinfo tutorial to demonstrate how you can query for metrics using Prometheus.

Prerequisites:

  • OpenShift Container Platform 3.11 or higher installed.

  • Red Hat OpenShift Service Mesh 0.11.TechPreview installed.

  • Bookinfo demonstration application installed.

Querying metrics

  1. Verify that the prometheus service is running in your cluster by executing the following command:

    $ oc get svc prometheus -n istio-system

    You will see something like the following:

    NAME         CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
    prometheus   10.59.241.54   <none>        9090/TCP   2m
  2. Generate network traffic by accessing the Bookinfo application:

    $ curl -o /dev/null http://$GATEWAY_URL/productpage
  3. A route to access the Prometheus user interface already exists. Query for details of the route:

    $ export PROMETHEUS_URL=$(oc get route -n istio-system prometheus -o jsonpath='{.spec.host}')
  4. Launch a browser and navigate to http://${PROMETHEUS_URL}. You will see the Prometheus home screen, similar to the following figure:

    prometheus home screen
  5. In the Expression field, enter istio_request_duration_seconds_count, and click the Execute button. You will see a screen similar to the following figure:

    prometheus metrics
  6. You can narrow down queries by using selectors. For example istio_request_duration_seconds_count{destination_workload="reviews-v2"} shows only counters with the matching destination_workload label. For more information about using queries, see the Prometheus documentation.

  7. To list all available Prometheus metrics, run the following command:

    $ oc get prometheus -n istio-system -o jsonpath='{.items[*].spec.metrics[*].name}'
    requests_total request_duration_seconds request_bytes response_bytes tcp_sent_bytes_total tcp_received_bytes_total

Note that returned metric names must be prepended with istio_ when used in queries, for example, requests_total is istio_requests_total.
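
For example, the requests counter can be queried with an expression such as the following; the label value assumes the Bookinfo reviews workloads:

istio_requests_total{destination_workload=~"reviews-v.*"}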

Removing the Prometheus tutorial

Follow the procedure for removing the Bookinfo tutorial to remove the Prometheus tutorial.

Kiali tutorial

Kiali works with Istio to visualize your service mesh topology to provide visibility into features like circuit breakers, request rates, and more. Kiali offers insights about the mesh components at different levels, from abstract Applications to Services and Workloads. Kiali provides an interactive graph view of your namespace in real time. It can display the interactions at several levels (applications, versions, workloads) with contextual information and charts on the selected graph node or edge.

This tutorial uses Service Mesh and the bookinfo tutorial to demonstrate how you can use the Kiali console to view the topography and health of your service mesh.

Prerequisites:

  • OpenShift Container Platform 3.11 or higher installed.

  • Red Hat OpenShift Service Mesh 0.11.TechPreview installed.

  • Kiali parameters specified in the custom resource file.

  • Bookinfo demonstration application installed.

Accessing the Kiali console

  1. A route to access the Kiali console already exists. Run the following command to obtain the route and Kiali URL:

    $ oc get routes

    While your exact environment may be different, you should see a result that’s something like this:

    NAME                   HOST/PORT                                                PATH      SERVICES               PORT              TERMINATION   WILDCARD
    grafana                grafana-istio-system.127.0.0.1.nip.io                          grafana                http                            None
    istio-ingress          istio-ingress-istio-system.127.0.0.1.nip.io                    istio-ingress          http                            None
    istio-ingressgateway   istio-ingressgateway-istio-system.127.0.0.1.nip.io             istio-ingressgateway   http                            None
    jaeger-query           jaeger-query-istio-system.127.0.0.1.nip.io                     jaeger-query           jaeger-query      edge          None
    kiali                  kiali-istio-system.127.0.0.1.nip.io                            kiali                  <all>                           None
    prometheus             prometheus-istio-system.127.0.0.1.nip.io                       prometheus             http-prometheus                 None
    tracing                tracing-istio-system.127.0.0.1.nip.io                          tracing                tracing           edge          None
  2. Launch a browser and navigate to https://${KIALI_URL} (in the output above, this is kiali-istio-system.127.0.0.1.nip.io). You should see the Kiali console login screen.

    Login Page

    Log in to the Kiali console using the user name and password that you specified in the custom resource file during installation.
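
If you prefer to store the Kiali host in a variable, as the other tutorials do for their routes, you can export it before opening the console:

$ export KIALI_URL=$(oc get route -n istio-system kiali -o jsonpath='{.spec.host}')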

Overview page

After you log in you see the Overview page, which provides you with a quick overview of the health of the various namespaces in your system.