Product overview

Red Hat OpenShift Service Mesh overview

This release of Red Hat OpenShift Service Mesh is a Technology Preview release only. Technology Preview releases are not supported with Red Hat production service-level agreements (SLAs), might not be functionally complete, and are not recommended for production use. Using Red Hat OpenShift Service Mesh on a cluster puts the entire OpenShift cluster into a Technology Preview, that is, unsupported, state. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Red Hat Technology Preview Features Support Scope.

Red Hat OpenShift Service Mesh is a platform that provides behavioral insights and operational control over the service mesh, providing a uniform way to connect, secure, and monitor microservice applications.

The term service mesh is often used to describe the network of microservices that make up applications based on a distributed microservice architecture and the interactions between those microservices. As a service mesh grows in size and complexity, it can become harder to understand and manage.

Based on the open source Istio project, Red Hat OpenShift Service Mesh layers transparently onto existing distributed applications, without requiring any changes in service code. You add Red Hat OpenShift Service Mesh support to services by deploying a special sidecar proxy throughout your environment that intercepts all network communication between microservices. You configure and manage the service mesh using the control plane features.

This provides an easy way to create a network of deployed services that provides discovery, load balancing, service-to-service authentication, failure recovery, metrics, and monitoring. A service mesh also supports more complex operational requirements, such as A/B testing, canary releases, rate limiting, access control, and end-to-end authentication.
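For example, a canary release is typically expressed as a weighted routing rule that the mesh applies without any change to the services themselves. The following sketch is illustrative only (it assumes a hypothetical reviews service whose v1 and v2 subsets are already defined by a destination rule); it shifts 10% of traffic to the new version:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    # 90% of requests continue to the stable version
    - destination:
        host: reviews
        subset: v1
      weight: 90
    # 10% of requests are sent to the canary
    - destination:
        host: reviews
        subset: v2
      weight: 10
```

Adjusting the weights over time gradually promotes the canary without redeploying the services.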

Red Hat OpenShift Service Mesh product architecture

Red Hat OpenShift Service Mesh is logically split into a data plane and a control plane:

  • The data plane is composed of a set of intelligent proxies deployed as sidecars. These proxies intercept and control all inbound and outbound network communication between microservices in the service mesh, and also communicate with Mixer, the general-purpose policy and telemetry hub.

  • The control plane is responsible for managing and configuring proxies to route traffic, and configuring Mixers to enforce policies and collect telemetry.

The components that make up the data plane and the control plane are as follows:

  • Envoy proxy is the data plane component which intercepts all inbound and outbound traffic for all services in the service mesh. Envoy is deployed as a sidecar to the relevant service in the same pod.

  • Mixer is the control plane component responsible for enforcing access control and usage policies (such as authorization, rate limits, quotas, authentication, and request tracing) and collecting telemetry data from the Envoy proxy and other services.

  • Pilot is the control plane component responsible for configuring the proxies at runtime. Pilot provides service discovery for the Envoy sidecars, traffic management capabilities for intelligent routing (for example, A/B tests and canary deployments), and resiliency (timeouts, retries, and circuit breakers).

  • Citadel is the control plane component responsible for certificate issuance and rotation. Citadel provides strong service-to-service and end-user authentication with built-in identity and credential management. You can use Citadel to upgrade unencrypted traffic in the service mesh. Using Citadel, operators can enforce policies based on service identity rather than on network controls.
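To make this division of labor concrete, the resiliency features that Pilot pushes to the Envoy sidecars are expressed as ordinary Kubernetes-style resources. The following DestinationRule is a sketch only (the host name and thresholds are hypothetical, not product defaults); it configures circuit breaking so that a backend returning three consecutive errors is ejected from the load-balancing pool for 30 seconds:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: details-circuit-breaker
spec:
  host: details
  trafficPolicy:
    connectionPool:
      http:
        # Keep the pool small so the breaker trips quickly under load
        http1MaxPendingRequests: 1
        maxRequestsPerConnection: 1
    outlierDetection:
      # Eject a host after 3 consecutive errors, checking every 10s
      consecutiveErrors: 3
      interval: 10s
      baseEjectionTime: 30s
```

Because the rule is applied by the sidecars, the application code needs no retry or failover logic of its own.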

Supported configurations

The following are the only supported configurations for Red Hat OpenShift Service Mesh 1.0.TechPreview:

  • Red Hat OpenShift Container Platform version 3.10

    OpenShift Online and OpenShift Dedicated are not supported for Red Hat OpenShift Service Mesh 1.0.TechPreview.

  • The deployment should be contained to a single OpenShift Container Platform cluster (no federation).

  • This release of Red Hat OpenShift Service Mesh is only available on OpenShift Container Platform x86_64.

  • Red Hat OpenShift Service Mesh is only suited for OpenShift Container Platform Software Defined Networking (SDN) configured as a flat network with no external providers.

  • This release only supports configurations where all Service Mesh components are contained in the OpenShift cluster in which it operates. It does not support management of microservices which reside outside of the cluster, or in a multi-cluster scenario.

Red Hat OpenShift Service Mesh installation overview

The Red Hat OpenShift Service Mesh installation process creates two different projects (namespaces):

  • istio-operator project (1 pod)

  • istio-system project (16 pods)

You first create the istio-operator project and deploy a Kubernetes operator into it. This operator defines and monitors a custom resource that manages the deployment, updating, and deletion of the Service Mesh components.

The operator creates an Ansible job that runs an Ansible playbook that performs the following installation and configuration tasks automatically:

  • Creates the istio-system namespace

  • Creates the openshift-ansible-istio-installer-job which installs the following Istio components:

    • istio-citadel

    • istio-egressgateway

    • istio-galley

    • istio-ingressgateway

    • istio-pilot

    • istio-policy

    • istio-sidecar-injector

    • istio-statsd-prom-bridge

    • istio-telemetry

  • Installs Elasticsearch

  • Installs Grafana

  • Installs the following Jaeger components (if configured in the custom resource):

    • jaeger-agent

    • jaeger-collector

    • jaeger-query

  • Installs Prometheus

  • Performs the following launcher configuration tasks (if configured in the custom resource):

    • Creates a devex project and installs the Fabric8 launcher into that project.

    • Adds the cluster admin role to the OpenShift user specified in the launcher parameters in the custom resource file.

Prerequisites

Red Hat OpenShift Service Mesh installation prerequisites

Before you can install Red Hat OpenShift Service Mesh, you must meet the following prerequisites:

  • Possess an active OpenShift Container Platform subscription on your Red Hat account. If you do not have a subscription, contact your sales representative for more information.

  • Install OpenShift Container Platform version 3.10 or higher. For more information about the system and environment requirements, see the OpenShift Container Platform documentation.

  • Install the version of the OpenShift Container Platform command line utility (the oc client tool) that matches your OpenShift Container Platform version (for example, if you have OpenShift Container Platform 3.10 you must have the matching oc client version 3.10), and add it to your path. For installation instructions, see the OpenShift Container Platform Command Line Reference document.

Creating a custom resource file

To deploy the Service Mesh control plane, you deploy a custom resource. A custom resource is an object that extends the Kubernetes API, or that allows you to introduce your own API into a project or a cluster. You define a custom resource in a YAML file, and then use that file to create the object. The complete example below contains all of the supported parameters and deploys Red Hat OpenShift Service Mesh 1.0.TechPreview images based on Red Hat Enterprise Linux (RHEL).

Deploying an istio-installation file that includes all of the parameters ensures that you have installed all of the Istio components required to complete the tutorials included in this document.

Full example istio-installation.yaml
apiVersion: "istio.openshift.com/v1alpha1"
kind: "Installation"
metadata:
  name: "istio-installation"
spec:
  deployment_type: openshift
  istio:
    authentication: true
    community: false
    prefix: openshift-istio-tech-preview/
    version: 0.1.0
  jaeger:
    prefix: distributed-tracing-tech-preview/
    version: 1.6.0
    elasticsearch_memory: 1Gi
  launcher:
    openshift:
      user: user
      password: password
    github:
      username: username
      token: token
    catalog:
      filter: filter
      branch: branch
      repo: repo

The following is the minimum required to install the control plane. This minimal example custom resource deploys the CentOS-based community Istio images.

Minimum example istio-installation.yaml
apiVersion: "istio.openshift.com/v1alpha1"
kind: "Installation"
metadata:
  name: "istio-installation"
spec:
  jaeger:
    elasticsearch_memory: 1Gi

The custom resource must be called istio-installation, that is, the metadata value for name must be istio-installation.

Custom resource parameters

The following tables list the supported custom resource parameters for Red Hat OpenShift Service Mesh.

Table 1. General parameters

  • deployment_type
    Values: origin, openshift
    Description: Specifies whether to use Origin (community) or OpenShift (product) default values for unset fields.
    Default: origin

Table 2. Istio parameters

  • authentication
    Values: true/false
    Description: Whether to enable mutual authentication.
    Default: false

  • community
    Values: true/false
    Description: Whether to modify image names to match community images.
    Default: false

  • prefix
    Values: Any valid image repository
    Description: The prefix to apply to Istio image names used in docker pull.
    Default: maistra/ if deployment_type=origin; openshift-istio-tech-preview/ if deployment_type=openshift

  • version
    Values: Any valid Docker tag
    Description: The Docker tag to use with Istio images.
    Default: 0.1.0

Table 3. Jaeger parameters

  • prefix
    Values: Any valid image repository
    Description: The prefix to apply to Jaeger image names used in docker pull.
    Default: jaegertracing/ if deployment_type=origin; distributed-tracing-tech-preview/ if deployment_type=openshift

  • version
    Values: Any valid Docker tag
    Description: The Docker tag to use with Jaeger images.
    Default: 1.6 if deployment_type=origin; 1.6.0 if deployment_type=openshift

  • elasticsearch_memory
    Values: Memory size in megabytes or gigabytes
    Description: The amount of memory to allocate to the Elasticsearch installation.
    Default: 1Gi

Table 4. Launcher parameters

  • openshift component:

    • user
      Description: The OpenShift user that you want to use to run the Fabric8 launcher.
      Default: developer

    • password
      Description: The password of the OpenShift user running the Fabric8 launcher.
      Default: developer

  • github component:

    • username
      Description: The GitHub account that you want to use to run the Fabric8 launcher.
      Default: N/A

    • token
      Description: The GitHub personal access token that you want to use to run the Fabric8 launcher.
      Default: N/A

  • catalog component:

    • filter
      Description: The filter to apply to the Red Hat booster catalog.
      Default: booster.mission.metadata.istio

    • branch
      Description: The version of the Red Hat booster catalog to use with Fabric8.
      Default: v35

    • repo
      Description: The GitHub repository to use for the Red Hat booster catalog.
      Default: https://github.com/fabric8-launcher/launcher-booster-catalog.git

Creating the operator namespace

  1. Create the istio-operator namespace with the following command:

    $ oc new-project istio-operator

    The custom resource must be deployed into the istio-operator namespace.

  2. Verify the namespace was created:

    $ oc get projects

Preparing the OpenShift Container Platform 3.10 installation

Before you can install the Service Mesh into an OpenShift Container Platform 3.10 installation you must modify the master configuration and each of the schedulable nodes. These changes enable features required within the Service Mesh and also ensure Elasticsearch will function correctly.

Updating the master configuration

The community version of Istio injects the sidecar by default if you have labeled the namespace. With Red Hat OpenShift Service Mesh, you are not required to label the namespace. Instead, Red Hat OpenShift Service Mesh requires you to opt in to having the sidecar automatically injected into a deployment. This avoids injecting a sidecar where it is not wanted (for example, into build or deploy pods).

To enable the automatic injection of the Service Mesh sidecar you first need to modify the master configuration on each master to include support for webhooks and signing of Certificate Signing Requests (CSRs).

Make the following changes on each master within your OpenShift Container Platform 3.10 installation:

  1. Change to the directory containing the master configuration file (for example, /etc/origin/master/master-config.yaml).

  2. Create a file named master-config.patch with the following contents:

    admissionConfig:
      pluginConfig:
        MutatingAdmissionWebhook:
          configuration:
            apiVersion: v1
            disable: false
            kind: DefaultAdmissionConfig
        ValidatingAdmissionWebhook:
          configuration:
            apiVersion: v1
            disable: false
            kind: DefaultAdmissionConfig
  3. Within the same directory issue the following commands to apply the patch to the master-config.yaml file:

    $ cp -p master-config.yaml master-config.yaml.prepatch
    $ oc ex config patch master-config.yaml.prepatch -p "$(cat master-config.patch)" > master-config.yaml
    $ master-restart api
    $ master-restart controllers
  4. Modify each deployment that you want to monitor as part of your service mesh to enable automatic sidecar injection. Each deployment where you want to enable automatic injection must contain the sidecar.istio.io/inject: "true" annotation:

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: ignored
    spec:
      template:
        metadata:
          annotations:
            sidecar.istio.io/inject: "true"
        spec:
          containers:

Updating the node configuration

To run the Elasticsearch application, you must make a change to the kernel configuration on each node. This change is handled through the sysctl service.

Make the following changes on each node within your OpenShift Container Platform 3.10 installation:

  1. Create a file named /etc/sysctl.d/99-elasticsearch.conf with the following contents:

    vm.max_map_count = 262144

  2. Execute the following command:

    $ sysctl vm.max_map_count=262144

Installing Service Mesh

Installing Red Hat OpenShift Service Mesh

The Service Mesh installation process introduces a Kubernetes operator to manage the installation of the control plane within the istio-system namespace. This operator defines and monitors a custom resource related to the deployment, update, and deletion of the control plane.

Installing the operator

The following steps install the Service Mesh operator into an existing OpenShift Container Platform 3.10 installation; you can execute them from any host with access to the cluster. Ensure that you are logged in as a cluster admin before executing the following commands.

You can find the operator templates on GitHub.

$ oc new-project istio-operator
$ oc new-app -f istio_product_operator_template.yaml --param=OPENSHIFT_ISTIO_MASTER_PUBLIC_URL=<master public url>

Verifying operator installation

The above instructions create a new deployment within the istio-operator project and execute the operator responsible for managing the state of the Red Hat OpenShift Service Mesh control plane through the custom resource.

  1. To verify the operator is installed correctly, execute the following command:

    $ oc get pods -n istio-operator
  2. You can access the logs from the istio-operator pod with the following command, replacing <pod name> with the name of the pod discovered above.

    $ oc logs -n istio-operator <pod name>

    While your exact environment may be different from the example, you should see output that looks similar to the following example:

    time="2018-08-31T17:42:39Z" level=info msg="Go Version: go1.9.4"
    time="2018-08-31T17:42:39Z" level=info msg="Go OS/Arch: linux/amd64"
    time="2018-08-31T17:42:39Z" level=info msg="operator-sdk Version: 0.0.5+git"
    time="2018-08-31T17:42:39Z" level=info msg="Metrics service istio-operator created"
    time="2018-08-31T17:42:39Z" level=info msg="Watching resource istio.openshift.com/v1alpha1, kind Installation, namespace istio-operator, resyncPeriod 0"
    time="2018-08-31T17:42:39Z" level=info msg="Installing istio for Installation istio-installation"

Deploying the control plane

You use the custom resource file that you created to deploy the Service Mesh control plane. To deploy the control plane, run the following command:

$ oc create -f cr.yaml -n istio-operator

The operator will create the istio-system namespace and run the installer job; this job will install and configure the control plane using Ansible playbooks. You can follow the progress of the installation by either watching the pods or the log output from the openshift-ansible-istio-installer-job pod.

To watch the progress of the pods, execute the following command:

$ oc get pods -n istio-system -w

Post installation tasks

Verifying the installation

Once the openshift-ansible-istio-installer-job has completed, run the following command:

$ oc get pods -n istio-system

Verify you have a state similar to the following:

NAME                                          READY     STATUS      RESTARTS   AGE
elasticsearch-0                               1/1       Running     0          2m
grafana-6d5c5477-k7wrh                        1/1       Running     0          2m
istio-citadel-6f9c778bb6-q9tg9                1/1       Running     0          3m
istio-egressgateway-957857444-2g84h           1/1       Running     0          3m
istio-galley-c47f5dffc-dm27s                  1/1       Running     0          3m
istio-ingressgateway-7db86747b7-s2dv9         1/1       Running     0          3m
istio-pilot-5646d7786b-rh54p                  2/2       Running     0          3m
istio-policy-7d694596c6-pfdzt                 2/2       Running     0          3m
istio-sidecar-injector-57466d9bb-4cjrs        1/1       Running     0          3m
istio-statsd-prom-bridge-7f44bb5ddb-6vx7n     1/1       Running     0          3m
istio-telemetry-7cf7b4b77c-p8m2k              2/2       Running     0          3m
jaeger-agent-5mswn                            1/1       Running     0          2m
jaeger-collector-9c9f8bc66-j7kjv              1/1       Running     0          2m
jaeger-query-fdc6dcd74-99pnx                  1/1       Running     0          2m
openshift-ansible-istio-installer-job-f8n9g   0/1       Completed   0          7m
prometheus-84bd4b9796-2vcpc                   1/1       Running     0          3m

If you have installed the Fabric8 launcher you should monitor the containers within the devex project until you have a state similar to the following:

NAME                          READY     STATUS    RESTARTS   AGE
configmapcontroller-1-8rr6w   1/1       Running   0          1m
launcher-backend-2-2wg86      1/1       Running   0          1m
launcher-frontend-2-jxjsd     1/1       Running   0          1m

Tutorials

There are several tutorials to help you learn more about the Service Mesh.

Bookinfo tutorial

The upstream Istio project has an example tutorial called bookinfo, which is composed of four separate microservices used to demonstrate various Istio features. The Bookinfo application displays information about a book, similar to a single catalog entry of an online book store. Displayed on the page is a description of the book, book details (ISBN, number of pages, and so on), and a few book reviews.

The Bookinfo application is broken into four separate microservices:

  • productpage - The productpage microservice calls the details and reviews microservices to populate the page.

  • details - The details microservice contains book information.

  • reviews - The reviews microservice contains book reviews. It also calls the ratings microservice.

  • ratings - The ratings microservice contains book ranking information that accompanies a book review.

There are three versions of the reviews microservice:

  • Version v1 doesn’t call the ratings service.

  • Version v2 calls the ratings service, and displays each rating as 1 to 5 black stars.

  • Version v3 calls the ratings service, and displays each rating as 1 to 5 red stars.
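Once destination rules defining the v1, v2, and v3 subsets are in place (the destination-rule-all.yaml file applied later in this tutorial defines them), you can steer traffic between the reviews versions with a VirtualService. The following sketch is based on the upstream Bookinfo routing samples (the end-user header match is illustrative); it sends the test user jason to v2 while all other users stay on v1:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  # Requests carrying the end-user: jason header go to v2 (black stars)
  - match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2
  # All other requests go to v1 (no ratings)
  - route:
    - destination:
        host: reviews
        subset: v1
```

Because rules are evaluated in order, the catch-all route must come last.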

Installing the Bookinfo application

The following steps describe deploying and running the Bookinfo tutorial on OpenShift Container Platform 3.10 with Service Mesh 1.0.TechPreview.

Because Red Hat OpenShift Service Mesh implements auto-injection differently than the Istio project, this procedure uses a version of the bookinfo.yaml file that has been annotated to enable automatic injection of the Istio sidecar.

  1. Create a project for the Bookinfo application.

    $ oc new-project myproject
  2. Update the Security Context Constraints (SCC) by adding the service account used by Bookinfo to the anyuid and privileged SCCs in the "myproject" namespace:

    $ oc adm policy add-scc-to-user anyuid -z default -n myproject
    
    $ oc adm policy add-scc-to-user privileged -z default -n myproject
  3. Deploy the Bookinfo application in the "myproject" namespace by applying the bookinfo.yaml file:

    $ oc apply -n myproject -f https://raw.githubusercontent.com/Maistra/bookinfo/master/bookinfo.yaml
  4. Create the ingress gateway for Bookinfo by applying the bookinfo-gateway.yaml file:

      $ oc apply -n myproject -f https://raw.githubusercontent.com/Maistra/bookinfo/master/bookinfo-gateway.yaml
  5. Set the value for the GATEWAY_URL parameter:

      $ export GATEWAY_URL=$(oc get route -n istio-system istio-ingressgateway -o jsonpath='{.spec.host}')

Verifying the Bookinfo installation

To confirm that the application deployed successfully, run this command:

  $ curl -o /dev/null -s -w "%{http_code}\n" http://$GATEWAY_URL/productpage

Alternatively, you can open http://$GATEWAY_URL/productpage in your browser.

Add default destination rules

  1. If you did not enable mutual TLS:

      $ curl -o destination-rule-all.yaml https://raw.githubusercontent.com/istio/istio/release-1.0/samples/bookinfo/networking/destination-rule-all.yaml
      $ oc apply -f destination-rule-all.yaml
  2. If you enabled mutual TLS:

      $ curl -o destination-rule-all-mtls.yaml https://raw.githubusercontent.com/istio/istio/release-1.0/samples/bookinfo/networking/destination-rule-all-mtls.yaml
      $ oc apply -f destination-rule-all-mtls.yaml
  3. To list all available destination rules:

      $ oc get destinationrules -o yaml

Removing the Bookinfo application

When you have finished with the Bookinfo application you can remove it by running the cleanup script.

Several of the other tutorials in this document also use the Bookinfo application. Do not run the cleanup script if you plan to continue with the other tutorials.

  1. Download the cleanup script:

      $ curl -o cleanup.sh https://raw.githubusercontent.com/Maistra/bookinfo/master/cleanup.sh && chmod +x ./cleanup.sh
  2. Delete the Bookinfo virtual service and gateway, and terminate the pods, by running the cleanup script:

      $ ./cleanup.sh
      namespace ? [default] myproject
  3. Confirm shutdown by running these commands:

      $ oc get virtualservices -n myproject
      No resources found.
      $ oc get gateway -n myproject
      No resources found.
      $ oc get pods -n myproject
      No resources found.

Distributed tracing tutorial

This tutorial uses Service Mesh and the bookinfo tutorial to demonstrate how you can perform a trace using the Jaeger component of Red Hat OpenShift Service Mesh.

Prerequisites:

  • OpenShift Container Platform 3.10 or higher installed.

  • Red Hat OpenShift Service Mesh 1.0.TechPreview installed.

  • Bookinfo demonstration application installed.

Generating traces and analyzing trace data

  1. Once you’ve deployed the Bookinfo application, generate some activity by accessing http://$GATEWAY_URL/productpage and refreshing the page a few times.

  2. A route to access the Jaeger dashboard should already exist. Query OpenShift for details of the route:

      $ export JAEGER_URL=$(oc get route -n istio-system jaeger-query -o jsonpath='{.spec.host}')
  3. Launch a browser and navigate to https://${JAEGER_URL}.

  4. In the left-hand pane of the Jaeger dashboard, from the Service menu, select "productpage" and click the Find Traces button at the bottom of the pane. You should see a list of available traces similar to the image below:

    jaeger main screen
  5. Click one of the traces in the list to open a detailed view of that trace. If you click the top (most recent) trace, you should see the details corresponding to your latest refresh of /productpage.

    jaeger spans

    The trace in the image above consists of a few nested spans, each corresponding to a Bookinfo service call, all performed in response to a /productpage request. Overall processing time was 2.62s, with the "details" service taking 3.56ms, the "reviews" service taking 2.6s, and the "ratings" service taking 5.32ms. Each call to a remote service is represented by client-side and server-side spans. For example, the "details" client-side span is labeled productpage details.myproject.svc.cluster.local:9080. The span nested underneath it, labeled details details.myproject.svc.cluster.local:9080, corresponds to the server-side processing of the request. The trace also shows calls to "istio-policy", which reflect authorization checks made by Istio.

Removing the tracing tutorial

The procedure for removing the Tracing tutorial is the same as removing the Bookinfo tutorial.

Grafana tutorial

This tutorial builds on the Bookinfo tutorial and shows you how to set up and use the Istio Dashboard to monitor mesh traffic. As part of this task, you install the Grafana Istio add-on and use the web-based interface to view service mesh traffic data.

Prerequisites:

  • OpenShift Container Platform 3.10 or higher installed.

  • Red Hat OpenShift Service Mesh 1.0.TechPreview installed.

  • Bookinfo demonstration application installed.

Accessing the Grafana dashboard

  1. A route to access the Grafana dashboard should already exist. Query OpenShift for details of the route:

      $ export GRAFANA_URL=$(oc get route -n istio-system grafana -o jsonpath='{.spec.host}')
  2. Launch a browser and navigate to http://${GRAFANA_URL}. You should see Grafana’s home screen, similar to the image below:

    grafana home screen
  3. From the menu in the top left corner, select "Istio Mesh Dashboard" to see Istio mesh metrics.

    grafana mesh no traffic
  4. Generate some traffic by accessing Bookinfo application:

      $ curl -o /dev/null http://$GATEWAY_URL/productpage

    The dashboard should reflect the traffic through the mesh, similar to the following image:

    grafana mesh with traffic
  5. To see detailed metrics for a service, click a service name in the "Service" column. The service dashboard will look similar to the following image: