Table of Contents

Product overview

Red Hat OpenShift Service Mesh overview

This release of Red Hat OpenShift Service Mesh is a Technology Preview release only. Technology Preview releases are not supported with Red Hat production service-level agreements (SLAs), might not be functionally complete, and are not recommended by Red Hat for production use. Using Red Hat OpenShift Service Mesh on a cluster places the whole OpenShift cluster in an unsupported Technology Preview state. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Red Hat Technology Preview Features Support Scope.

Red Hat OpenShift Service Mesh is a platform that provides behavioral insight and operational control over the service mesh, providing a uniform way to connect, secure, and monitor microservice applications.

The term service mesh describes the network of microservices that make up applications in a distributed microservice architecture and the interactions between those microservices. As a service mesh grows in size and complexity, it can become harder to understand and manage.

Based on the open source Istio project, Red Hat OpenShift Service Mesh adds a transparent layer on existing distributed applications without requiring any changes to the service code. You add Red Hat OpenShift Service Mesh support to services by deploying a special sidecar proxy throughout your environment that intercepts all network communication between microservices. You configure and manage the service mesh using the control plane features.

Red Hat OpenShift Service Mesh provides an easy way to create a network of deployed services that provides discovery, load balancing, service-to-service authentication, failure recovery, metrics, and monitoring. A service mesh also provides more complex operational functionality, including A/B testing, canary releases, rate limiting, access control, and end-to-end authentication.

Red Hat OpenShift Service Mesh product architecture

Red Hat OpenShift Service Mesh is logically split into a data plane and a control plane:

  • The data plane is composed of a set of intelligent proxies deployed as sidecars. These proxies intercept and control all inbound and outbound network communication between microservices in the service mesh; sidecar proxies also communicate with Mixer, the general-purpose policy and telemetry hub.

  • The control plane is responsible for managing and configuring proxies to route traffic, and configuring Mixers to enforce policies and collect telemetry.

The components that make up the data plane and the control plane are:

  • Envoy proxy is the data plane component that intercepts all inbound and outbound traffic for all services in the service mesh. Envoy is deployed as a sidecar to the relevant service in the same pod.

  • Mixer is the control plane component responsible for enforcing access control and usage policies (such as authorization, rate limits, quotas, authentication, request tracing) and collecting telemetry data from the Envoy proxy and other services.

  • Pilot is the control plane component responsible for configuring the proxies at runtime. Pilot provides service discovery for the Envoy sidecars, traffic management capabilities for intelligent routing (for example, A/B tests or canary deployments), and resiliency (timeouts, retries, and circuit breakers).

  • Citadel is the control plane component responsible for certificate issuance and rotation. Citadel provides strong service-to-service and end-user authentication with built-in identity and credential management. You can use Citadel to upgrade unencrypted traffic in the service mesh to encrypted traffic. Using Citadel, operators can enforce policies based on service identity rather than on network controls.

Supported configurations

The following are the only supported configurations for Red Hat OpenShift Service Mesh 0.9.TechPreview:

  • Red Hat OpenShift Container Platform version 3.11.

  • Red Hat OpenShift Container Platform version 4.0 Beta.

OpenShift Online and OpenShift Dedicated are not supported for Red Hat OpenShift Service Mesh 0.9.TechPreview.

  • The deployment must be contained to a single OpenShift Container Platform cluster (no federation).

  • This release of Red Hat OpenShift Service Mesh is only available on OpenShift Container Platform x86_64.

  • Red Hat OpenShift Service Mesh supports only OpenShift Container Platform Software Defined Networking (SDN) configured as a flat network with no external providers.

  • This release supports only configurations where all service mesh components are contained in the OpenShift cluster in which it operates. It does not support management of microservices that reside outside of the cluster, or in a multi-cluster scenario.

  • The Kiali observability console is only supported on the two most recent releases of the Chrome, Edge, Firefox, or Safari browsers.

For more information about support for this technology preview, see this Red Hat Knowledge Base article.

Comparing Red Hat OpenShift Service Mesh and upstream Istio community installations

An installation of Red Hat OpenShift Service Mesh differs from upstream Istio community installations in multiple ways. The modifications to Red Hat OpenShift Service Mesh are sometimes necessary to resolve issues, provide additional features, or to handle differences when deploying on OpenShift.

The current release of Red Hat OpenShift Service Mesh differs from the current upstream Istio community release in the following ways:

Automatic injection

The upstream Istio community installation automatically injects the sidecar into namespaces that you have labeled.

Red Hat OpenShift Service Mesh does not automatically inject the sidecar into any namespace; instead, it requires you to specify the sidecar.istio.io/inject annotation, as illustrated in the Automatic sidecar injection section.

Role Based Access Control features

Role Based Access Control (RBAC) provides a mechanism you can use to control access to a service. You can identify subjects by username or by specifying a set of properties and apply access controls accordingly.

The upstream Istio community installation includes options to perform exact header matches, match wildcards in headers, or check for a header containing a specific prefix or suffix.

Upstream Istio community matching request headers example
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRoleBinding
metadata:
  name: httpbin-client-binding
  namespace: httpbin
spec:
  subjects:
  - user: "cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account"
    properties:
      request.headers[<header>]: "value"

Red Hat OpenShift Service Mesh extends the ability to match request headers by using a regular expression. Specify a property key of request.regex.headers with a regular expression.

Red Hat OpenShift Service Mesh matching request headers by using regular expressions
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRoleBinding
metadata:
  name: httpbin-client-binding
  namespace: httpbin
spec:
  subjects:
  - user: "cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account"
    properties:
      request.regex.headers[<header>]: "<regular expression>"
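For example, to grant access only to requests whose end-user header value starts with admin, you could set a property like the following (the header name and pattern here are illustrative, not taken from the product documentation):

    properties:
      request.regex.headers[end-user]: "^admin.*"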

Red Hat OpenShift Service Mesh installation overview

The Red Hat OpenShift Service Mesh installation process creates two different projects (namespaces):

  • istio-operator project (1 pod)

  • istio-system project (17 pods)

You first install a Kubernetes operator. This operator defines and monitors a custom resource that manages the deployment, updating, and deletion of the Service Mesh components.

Depending on how you define the custom resource file, you can install one or more of the following components when you install the Service Mesh:

  • Istio - based on the open source Istio project, lets you connect, secure, control, and observe the microservices that make up your applications.

  • Jaeger - based on the open source Jaeger project, lets you perform tracing to monitor and troubleshoot transactions in complex distributed systems.

  • Kiali - based on the open source Kiali project, provides observability for your service mesh. Using Kiali lets you view configurations, monitor traffic, and view and analyze traces in a single console.

  • Launcher - based on the open source fabric8 community, this integrated development platform helps you build cloud-native applications and microservices. Red Hat OpenShift Service Mesh includes several boosters that let you explore features of the Service Mesh.

During the installation, the operator creates an Ansible job that runs an Ansible playbook, which performs the following installation and configuration tasks automatically:

  • Creates the istio-system namespace

  • Creates the openshift-ansible-istio-installer-job, which installs the following components:

    • Istio components:

      • istio-citadel

      • istio-egressgateway

      • istio-galley

      • istio-ingressgateway

      • istio-pilot

      • istio-policy

      • istio-sidecar-injector

      • istio-telemetry

    • Elasticsearch

    • Grafana

    • Jaeger components:

      • jaeger-agent

      • jaeger-collector

      • jaeger-query

    • Kiali components (if configured in the custom resource definition):

      • Kiali

    • Prometheus

    • 3scale components (if configured in the custom resource definition):

      • 3scale-istio-adapter

  • Performs the following launcher configuration tasks (if configured in the custom resource definition):

    • Creates a devex project and installs the Fabric8 launcher into that project.

    • Adds the cluster admin role to the OpenShift Container Platform user specified in the launcher parameters in the custom resource file.

Prerequisites

Red Hat OpenShift Service Mesh installation prerequisites

Before you can install Red Hat OpenShift Service Mesh, you must meet the following prerequisites:

  • Possess an active OpenShift Container Platform subscription on your Red Hat account. If you do not have a subscription, contact your sales representative for more information.

  • Install OpenShift Container Platform version 3.11 or higher. For more information about the system and environment requirements, see the OpenShift Container Platform documentation.

  • Install the version of the OpenShift Container Platform command line utility (the oc client tool) that matches your OpenShift Container Platform version and add it to your path. For example, if you have OpenShift Container Platform 3.11 you must have the matching oc client version 3.11. For installation instructions, see the OpenShift Container Platform Command Line Reference document.

Preparing the OpenShift Container Platform installation

Before you can install the Service Mesh into an OpenShift Container Platform installation, you must modify the master configuration and each of the schedulable nodes. These changes enable the features that are required in the Service Mesh and also ensure that Elasticsearch features function correctly.

Updating the node configuration

To run the Elasticsearch application, you must make a change to the kernel configuration on each node. This change is handled through the sysctl service.

Make the following changes on each node within your OpenShift Container Platform installation:

  1. Create a file named /etc/sysctl.d/99-elasticsearch.conf with the following contents:

    vm.max_map_count = 262144
  2. Execute the following command:

    $ sysctl vm.max_map_count=262144
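To verify that the new value is active, you can query the setting; the expected output is shown after the command:

$ sysctl vm.max_map_count
vm.max_map_count = 262144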

Installing Service Mesh

Installing the Red Hat OpenShift Service Mesh

Installing the Service Mesh involves creating a custom resource file and then installing the operator to create and manage the custom resource.

Starting with Red Hat OpenShift Service Mesh 0.9.TechPreview, Mixer’s policy enforcement is disabled by default, but you must enable it to run policy tasks. See Update Mixer policy enforcement for instructions on enabling Mixer policy enforcement.

Creating a custom resource file

To deploy the Service Mesh control plane, you must deploy a custom resource. A custom resource is an object that extends the Kubernetes API, or allows you to introduce your own API into a project or a cluster. You define a custom resource in a YAML file, and then use that YAML file to create the object. The following example contains all of the supported parameters and deploys Red Hat OpenShift Service Mesh 0.9.TechPreview images based on Red Hat Enterprise Linux (RHEL).

Deploying an istio-installation.yaml file that includes all of the parameters ensures that you have installed all of the Istio components that are required to complete the tutorials included in this document.

The 3scale Istio Adapter is deployed and configured in the custom resource file. Using the adapter also requires a working 3scale account (SaaS or On-Premises).

Full example istio-installation.yaml
apiVersion: "istio.openshift.com/v1alpha1"
kind: "Installation"
metadata:
  name: "istio-installation"
spec:
  deployment_type: openshift
  istio:
    authentication: true
    community: false
    prefix: openshift-istio-tech-preview/
    version: 0.9.0
  jaeger:
    prefix: distributed-tracing-tech-preview/
    version: 1.11.0
    elasticsearch_memory: 1Gi
  kiali:
    username: username
    password: password
    prefix: openshift-istio-tech-preview/
    version: 0.15.0
  launcher:
    openshift:
      user: user
      password: password
    github:
      username: username
      token: token
    catalog:
      filter: booster.mission.metadata.istio
      branch: v85
      repo: https://github.com/fabric8-launcher/launcher-booster-catalog.git
  threeScale:
    enabled: false
    prefix: openshift-istio-tech-preview/
    version: 0.4.1
    adapter:
      listenAddr: 3333
      logLevel: info
      logJSON: true
      reportMetrics: true
      metricsPort: 8080
      cacheTTLSeconds: 300
      cacheRefreshSeconds: 180
      cacheEntriesMax: 1000
      cacheRefreshRetries: 1
      allowInsecureConn: false
      clientTimeoutSeconds: 10

The following example illustrates the minimum required to install the control plane. This minimal example custom resource deploys the CentOS-based community Istio images.

Minimum example istio-installation.yaml
apiVersion: "istio.openshift.com/v1alpha1"
kind: "Installation"
metadata:
  name: "istio-installation"

Custom resource parameters

The following tables list the supported custom resource parameters for Red Hat OpenShift Service Mesh.

Table 1. General parameters
Parameter Values Description Default

deployment_type

origin, openshift

Specifies whether to use Origin (community) or OpenShift Container Platform (product) default values for undefined parameter values.

origin

Table 2. Istio parameters
Parameter Values Description Default

authentication

true/false

Whether to enable mutual authentication.

false

community

true/false

Whether to modify image names to match community images.

false

prefix

Any valid image repo

Which prefix to apply to Istio image names that are used in podman pull or docker pull commands.

If deployment_type=origin the default value is maistra/.

If deployment_type=openshift the default value is openshift-istio-tech-preview/.

version

Any valid container image tag

Container image tag to use with Istio images.

0.9.0

Table 3. Jaeger parameters
Parameter Values Description Default

prefix

Any valid image repo.

Which prefix to apply to Jaeger image names used in podman pull or docker pull.

If deployment_type=origin the default value is jaegertracing/.

If deployment_type=openshift the default value is distributed-tracing-tech-preview/.

version

Any valid container image tag.

Which container image tag to use with Jaeger images.

The default value is 1.11 if deployment_type=origin.

The default value is 1.11.0 if deployment_type=openshift.

elasticsearch_memory

Memory size in megabytes or gigabytes.

The amount of memory to allocate to the Elasticsearch installation, for example, 1000MB or 1GB.

1Gi

Table 4. Kiali parameters
Parameter Values Description Default

username

valid user

The user name to use to access the Kiali console. Note that this is not related to any account on OpenShift Container Platform.

N/A

password

valid password

The password to use to access the Kiali console. Note that this is not related to any account on OpenShift Container Platform.

N/A

prefix

valid image repository

Which prefix to apply to the Kiali image names used in podman pull or docker pull commands.

If deployment_type=origin the default value is kiali/.

If deployment_type=openshift the default value is openshift-istio-tech-preview/.

version

valid Kiali tag

Which container image tag to use with Kiali images.

The default value is v0.15.0 if deployment_type=origin.

The default value is 0.15.0 if deployment_type=openshift.

Table 5. Launcher parameters
Component Parameter Description Default

openshift

user

The OpenShift Container Platform user that you want to use to run the Fabric8 launcher.

developer

password

The password of the OpenShift Container Platform user that runs the Fabric8 launcher.

developer

github

username

The GitHub account that you want to use to run the Fabric8 launcher.

N/A

token

The GitHub personal access token that you want to use to run the Fabric8 launcher.

N/A

catalog

filter

Filter to apply to the Red Hat booster catalog.

booster.mission.metadata.istio

branch

Version of the Red Hat booster catalog that should be used with Fabric8.

v85

repo

The GitHub repository to use for the Red Hat booster catalog.

https://github.com/fabric8-launcher/launcher-booster-catalog.git

Table 6. 3scale parameters
Parameter Description Values Default

enabled

Whether to install the 3scale adapter

true/false

false

prefix

A prefix to apply to the 3scale adapter image name used in podman pull or docker pull commands.

valid image repo

If deployment_type=origin the default value is quay.io/3scale/.

If deployment_type=openshift the default value is openshift-istio-tech-preview/.

version

Container image tag to use with the 3scale adapter image

Any valid container image tag

0.4.1

Table 7. 3scale Adapter parameters
Parameter Description Default

listenAddr

Sets the listen address for the gRPC server

0

logLevel

Sets the minimum log output level. Accepted values are one of debug, info, warn, error, none

info

logJSON

Controls whether the log is formatted as JSON

true

reportMetrics

Controls whether 3scale system and backend metrics are collected and reported to Prometheus

true

metricsPort

Sets the port from which the 3scale /metrics endpoint can be scraped

8080

cacheTTLSeconds

Time period, in seconds, to wait before purging expired items from the cache

300

cacheRefreshSeconds

Time period, in seconds, before expiry when cache elements are refreshed

180

cacheEntriesMax

Max number of items that can be stored in the cache at any time. Set to 0 to disable caching

1000

cacheRefreshRetries

Number of times to retry refreshing a cache element before it expires

1

allowInsecureConn

Allows skipping certificate verification when calling 3scale APIs. Enabling this is not recommended

false

clientTimeoutSeconds

Sets the number of seconds to wait before terminating requests to 3scale System and Backend

10

Installing the operator

The Service Mesh installation process introduces a Kubernetes operator to manage the installation of the control plane within the istio-system namespace. This operator defines and monitors a custom resource related to the deployment, update, and deletion of the control plane.

You can find the operator templates on GitHub.

You must name the custom resource istio-installation, that is, the metadata value for name must be istio-installation, and you must install it into the istio-operator namespace that you create when installing the operator.

The following commands install the Service Mesh operator into an existing OpenShift Container Platform installation; you can run them from any host with access to the cluster. Ensure that you are logged in as a cluster admin before executing these commands.

$ oc new-project istio-operator
$ oc new-app -f istio_product_operator_template.yaml --param=OPENSHIFT_ISTIO_MASTER_PUBLIC_URL=<master public url>

The OpenShift Master Public URL must be configured to match the public URL of your OpenShift Container Platform Console; this parameter is required by the Fabric8 Launcher.

Update Mixer policy enforcement

In previous versions of Red Hat OpenShift Service Mesh, Mixer’s policy enforcement was enabled by default. However, starting with Red Hat OpenShift Service Mesh 0.9.TechPreview, Mixer policy enforcement is disabled by default. You must enable it before running policy tasks.

To check the current Mixer policy enforcement status, run the following command:

$ oc get cm -n istio-system istio -o jsonpath='{.data.mesh}' | grep disablePolicyChecks

If disablePolicyChecks: true, follow these steps to enable policy enforcement in Mixer:

  1. Edit the Service Mesh ConfigMap:

    $ oc edit cm -n istio-system istio
  2. Locate disablePolicyChecks: true within the ConfigMap and change the value to false.

  3. Save the configuration and exit the editor.

  4. Re-check the Mixer policy enforcement status to ensure it is set to false, as shown below.
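For the final step, you can rerun the same query used earlier; the value should now read false:

$ oc get cm -n istio-system istio -o jsonpath='{.data.mesh}' | grep disablePolicyChecks
disablePolicyChecks: false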

Verifying operator installation

The previous commands create a new deployment within the istio-operator project and run the operator responsible for managing the state of the Red Hat OpenShift Service Mesh control plane through the custom resource.

To verify that the operator is installed correctly, access the logs from the operator pod by running the following command:

$ oc logs -n istio-operator $(oc -n istio-operator get pods -l name=istio-operator --output=jsonpath={.items..metadata.name})

Although your exact environment may differ, you should see output similar to the following example:

time="2018-08-31T17:42:39Z" level=info msg="Go Version: go1.9.4"
time="2018-08-31T17:42:39Z" level=info msg="Go OS/Arch: linux/amd64"
time="2018-08-31T17:42:39Z" level=info msg="operator-sdk Version: 0.0.5+git"
time="2018-08-31T17:42:39Z" level=info msg="Metrics service istio-operator created"
time="2018-08-31T17:42:39Z" level=info msg="Watching resource istio.openshift.com/v1alpha1, kind Installation, namespace istio-operator, resyncPeriod 0"

Deploying the control plane

You use the custom resource file that you created to deploy the Service Mesh control plane. To deploy the control plane, run the following command:

$ oc create -f istio-installation.yaml -n istio-operator

The operator creates the istio-system namespace and runs the installer job; this job installs and configures the control plane using Ansible playbooks. You can follow the progress of the installation by watching either the pods or the log output from the openshift-ansible-istio-installer-job pod.

To watch the progress of the pods, run the following command:

$ oc get pods -n istio-system -w
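Alternatively, after the installer pod starts, you can follow the job's log output directly (the job name matches the one the operator creates):

$ oc logs -n istio-system -f job/openshift-ansible-istio-installer-job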

Post installation tasks

Verifying the installation

After the openshift-ansible-istio-installer-job has completed, run the following command:

$ oc get pods -n istio-system

Verify that you have a state similar to the following:

NAME                                          READY     STATUS      RESTARTS   AGE
3scale-istio-adapter-7df4db48cf-sc98s         1/1       Running     0          13s
elasticsearch-0                               1/1       Running     0          29s
grafana-c7f5cc6b6-vg6db                       1/1       Running     0          33s
istio-citadel-d6d6bb7bb-jgfwt                 1/1       Running     0          1m
istio-egressgateway-69448cf7dc-b2qj5          1/1       Running     0          1m
istio-galley-f49696978-q949d                  1/1       Running     0          1m
istio-ingressgateway-7759647fb6-pfpd5         1/1       Running     0          1m
istio-pilot-7595bfd696-plffk                  2/2       Running     0          1m
istio-policy-779454b878-xg7nq                 2/2       Running     2          1m
istio-sidecar-injector-6655b6ffdb-rn69r       1/1       Running     0          1m
istio-telemetry-dd9595888-8xjz2               2/2       Running     2          1m
jaeger-agent-gmk72                            1/1       Running     0          25s
jaeger-collector-7f644df9f5-dbzcv             1/1       Running     1          25s
jaeger-query-6f47bf4777-h4wmh                 1/1       Running     1          25s
kiali-7cc48b6cbb-74gcf                        1/1       Running     0          17s
openshift-ansible-istio-installer-job-fbtfj   0/1       Completed   0          2m
prometheus-5f9fd67f8-r6b86                    1/1       Running     0          1m

If you also installed the Fabric8 launcher, monitor the containers within the devex project until the following state is reached:

NAME                          READY     STATUS    RESTARTS   AGE
configmapcontroller-1-8rr6w   1/1       Running   0          1m
launcher-backend-2-2wg86      1/1       Running   0          1m
launcher-frontend-2-jxjsd     1/1       Running   0          1m

Application requirements

Requirements for deploying applications on Red Hat OpenShift Service Mesh

When deploying an application into the Service Mesh, there are several differences between the behavior of the upstream community version of Istio and the behavior within a Red Hat OpenShift Service Mesh installation.

Configuring security constraints for application service accounts

When deploying an application into a Service Mesh running in an OpenShift environment, it is currently necessary to relax the security constraints placed on the application by its service account to ensure the application can function correctly. Each service account must be granted permissions with the anyuid and privileged Security Context Constraints (SCC) to enable the sidecars to run correctly.

The privileged SCC is required to ensure that the istio-init initialization container can successfully update the pod’s networking configuration, and the anyuid SCC is required to enable the sidecar container to run with its required user ID of 1337.

To configure the correct permissions, you must identify the service accounts used by your application’s pods. For most applications this is the default service account, but your Deployment/DeploymentConfig may override this within the pod specification by providing the serviceAccountName.

For each identified service account, you must update the cluster configuration to grant access to the anyuid and privileged SCCs by executing the following commands from an account with cluster admin privileges. Replace <service account> and <namespace> with values specific to your application.

$ oc adm policy add-scc-to-user anyuid -z <service account> -n <namespace>
$ oc adm policy add-scc-to-user privileged -z <service account> -n <namespace>
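For example, if your application runs under the default service account in a hypothetical myapp namespace, the commands would be:

$ oc adm policy add-scc-to-user anyuid -z default -n myapp
$ oc adm policy add-scc-to-user privileged -z default -n myapp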

Relaxing the security constraints is only necessary during the Red Hat OpenShift Service Mesh Technology Preview.

Updating the master configuration

Master configuration updates are not necessary if you are running OpenShift Container Platform 4.0.

Service Mesh relies on the existence of a proxy sidecar within the application’s pod to provide service mesh capabilities to the application. You can enable automatic sidecar injection or manage it manually. We recommend automatic injection, which uses the annotation and does not require labeling namespaces, to ensure that your application contains the appropriate configuration for your service mesh upon deployment. This method requires fewer privileges and does not conflict with other OpenShift capabilities such as builder pods.

The upstream version of Istio injects the sidecar by default into any namespace that you have labeled. Red Hat OpenShift Service Mesh does not require you to label namespaces; instead, it requires you to opt in to having the sidecar automatically injected into a deployment. This avoids injecting a sidecar where it is not wanted (for example, into build or deploy pods). The webhook checks the configuration of pods deploying into all namespaces to see whether they are opting in to injection with the appropriate annotation.

To enable the automatic injection of the Service Mesh sidecar you must first modify the master configuration on each master to include support for webhooks and signing of Certificate Signing Requests (CSRs).

Make the following changes on each master within your OpenShift Container Platform installation:

  1. Change to the directory containing the master configuration file (for example, /etc/origin/master/master-config.yaml).

  2. Create a file named master-config.patch with the following contents:

    admissionConfig:
      pluginConfig:
        MutatingAdmissionWebhook:
          configuration:
            apiVersion: apiserver.config.k8s.io/v1alpha1
            kubeConfigFile: /dev/null
            kind: WebhookAdmission
        ValidatingAdmissionWebhook:
          configuration:
            apiVersion: apiserver.config.k8s.io/v1alpha1
            kubeConfigFile: /dev/null
            kind: WebhookAdmission
  3. In the same directory, issue the following commands to apply the patch to the master-config.yaml file:

    $ cp -p master-config.yaml master-config.yaml.prepatch
    $ oc ex config patch master-config.yaml.prepatch -p "$(cat master-config.patch)" > master-config.yaml
    $ /usr/local/bin/master-restart api && /usr/local/bin/master-restart controllers
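As an optional sanity check, you can confirm that the patch was applied by searching the resulting file for the webhook plugin configuration:

$ grep -A 6 MutatingAdmissionWebhook master-config.yaml
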
Automatic sidecar injection

When deploying an application into the Red Hat OpenShift Service Mesh you must opt in to injection by specifying the sidecar.istio.io/inject annotation with a value of true. The decision to opt in is required to ensure the sidecar injection does not interfere with other OpenShift features such as builder pods used by numerous frameworks within the OpenShift ecosystem.

This example shows the annotation used within the sleep test application. The additional sidecar containers are included when this configuration is deployed within a Red Hat OpenShift Service Mesh installation.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sleep
spec:
  replicas: 1
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "true"
      labels:
        app: sleep
    spec:
      containers:
      - name: sleep
        image: tutum/curl
        command: ["/bin/sleep","infinity"]
        imagePullPolicy: IfNotPresent
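If you save this configuration as sleep.yaml (the file name is illustrative), you can deploy it and confirm that the sidecar was injected by listing the containers in the pod; the istio-proxy container appears alongside the application container:

$ oc apply -f sleep.yaml
$ oc get pods -l app=sleep -o jsonpath='{.items[*].spec.containers[*].name}'
sleep istio-proxy
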
Manual sidecar injection

When you use manual sidecar injection, ensure you have access to a running cluster so the correct configuration can be obtained from the istio-sidecar-injector configmap within the istio-system namespace.

Manual injection of the sidecar is supported by using the upstream istioctl command. To obtain the executable and deploy an application with manual injection:

  • Download the appropriate installation for your OS

  • Unpack the installation into a directory and include the bin directory in your PATH

After installation, you can inject the sidecar into your application by executing the following command:

$ istioctl kube-inject -f app.yaml | oc create -f -

This command injects the sidecar containers into the application’s YAML configuration and pipes the modified configuration to the oc command to create the deployment.
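If you prefer to review the injected configuration before creating any objects, istioctl can also write the modified YAML to a file instead of standard output (the output file name is illustrative):

$ istioctl kube-inject -f app.yaml -o app-injected.yaml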

Tutorials

There are several tutorials to help you learn more about the Service Mesh.

Bookinfo tutorial

The upstream Istio project has an example tutorial called bookinfo, which is composed of four separate microservices used to demonstrate various Istio features. The Bookinfo application displays information about a book, similar to a single catalog entry of an online book store. Displayed on the page is a description of the book, book details (ISBN, number of pages and other information), and book reviews.

The Bookinfo application consists of four separate microservices:

  • The productpage microservice calls the details and reviews microservices to populate the page.

  • The details microservice contains book information.

  • The reviews microservice contains book reviews. It also calls the ratings microservice.

  • The ratings microservice contains book ranking information that accompanies a book review.

There are three versions of the reviews microservice:

  • Version v1 does not call the ratings service.

  • Version v2 calls the ratings service and displays each rating as one to five black stars.

  • Version v3 calls the ratings service and displays each rating as one to five red stars.

Installing the Bookinfo application

The following steps describe deploying and running the Bookinfo tutorial on OpenShift Container Platform with Service Mesh 0.9.TechPreview.

Prerequisites:

  • OpenShift Container Platform 3.11 or higher installed.

  • Red Hat OpenShift Service Mesh 0.9.TechPreview installed.

Red Hat OpenShift Service Mesh implements auto-injection differently than the upstream Istio project; therefore, this procedure uses a version of the bookinfo.yaml file annotated to enable automatic injection of the Istio sidecar.

  1. Create a project for the Bookinfo application.

    $ oc new-project myproject
  2. Update the Security Context Constraints (SCC) by adding the service account used by Bookinfo to the anyuid and privileged SCCs in the "myproject" namespace:

    $ oc adm policy add-scc-to-user anyuid -z default -n myproject
    $ oc adm policy add-scc-to-user privileged -z default -n myproject
  3. Deploy the Bookinfo application in the "myproject" namespace by applying the bookinfo.yaml file:

    $ oc apply -n myproject -f https://raw.githubusercontent.com/Maistra/bookinfo/master/bookinfo.yaml
  4. Create the ingress gateway for Bookinfo by applying the bookinfo-gateway.yaml file:

    $ oc apply -n myproject -f https://raw.githubusercontent.com/Maistra/bookinfo/master/bookinfo-gateway.yaml
  5. Set the value for the GATEWAY_URL parameter:

    $ export GATEWAY_URL=$(oc get route -n istio-system istio-ingressgateway -o jsonpath='{.spec.host}')

Verifying the Bookinfo installation

To confirm that the application is successfully deployed, run this command:

$ curl -o /dev/null -s -w "%{http_code}\n" http://$GATEWAY_URL/productpage
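If the application is deployed and reachable, the command prints the HTTP status code:

200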

Alternatively, you can open http://$GATEWAY_URL/productpage in your browser.

Add default destination rules

  1. If you did not enable mutual TLS:

    $ curl -o destination-rule-all.yaml https://raw.githubusercontent.com/istio/istio/release-1.0/samples/bookinfo/networking/destination-rule-all.yaml
    $ oc apply -f destination-rule-all.yaml
  2. If you enabled mutual TLS:

    $ curl -o destination-rule-all-mtls.yaml https://raw.githubusercontent.com/istio/istio/release-1.0/samples/bookinfo/networking/destination-rule-all-mtls.yaml
    $ oc apply -f destination-rule-all-mtls.yaml
  3. To list all available destination rules:

    $ oc get destinationrules -o yaml

Removing the Bookinfo application

When you finish with the Bookinfo application, you can remove it by running the cleanup script.

Several of the other tutorials in this document also use the Bookinfo application. Do not run the cleanup script if you plan to continue with the other tutorials.

  1. Download the cleanup script:

    $ curl -o cleanup.sh https://raw.githubusercontent.com/Maistra/bookinfo/master/cleanup.sh && chmod +x ./cleanup.sh
  2. Delete the Bookinfo virtual service and gateway, and terminate the pods, by running the cleanup script:

    $ ./cleanup.sh
    namespace ? [default] myproject
  3. Confirm shutdown by running these commands:

    $ oc get virtualservices -n myproject
    No resources found.
    $ oc get gateway -n myproject
    No resources found.
    $ oc get pods -n myproject
    No resources found.

Distributed tracing tutorial

Jaeger is an open source distributed tracing system. You use Jaeger for monitoring and troubleshooting microservices-based distributed systems. Using Jaeger you can perform a trace, which follows the path of a request through various microservices that make up an application. Jaeger is installed by default as part of the Service Mesh.

This tutorial uses Service Mesh and the bookinfo tutorial to demonstrate how you can perform a trace using the Jaeger component of Red Hat OpenShift Service Mesh.

Prerequisites:

  • OpenShift Container Platform 3.11 or higher installed.

  • Red Hat OpenShift Service Mesh 0.9.TechPreview installed.

  • Bookinfo demonstration application installed.

Generating traces and analyzing trace data

  1. After you have deployed the Bookinfo application, generate some activity by accessing http://$GATEWAY_URL/productpage and refreshing the page a few times.

  2. A route to access the Jaeger dashboard already exists. Query for details of the route:

    $ export JAEGER_URL=$(oc get route -n istio-system jaeger-query -o jsonpath='{.spec.host}')
  3. Launch a browser and navigate to https://${JAEGER_URL}.

  4. In the left pane of the Jaeger dashboard, from the Service menu, select "productpage" and click the Find Traces button at the bottom of the pane. A list of traces is displayed, as shown in the following image:

    jaeger main screen
  5. Click one of the traces in the list to open a detailed view of that trace. If you click the top (most recent) trace, you see the details that correspond to the latest refresh of /productpage.

    jaeger spans

    The trace in the previous figure consists of a few nested spans, each corresponding to a Bookinfo service call, all performed in response to a /productpage request. Overall processing time was 2.62s, with the details service taking 3.56ms, the reviews service taking 2.6s, and the ratings service taking 5.32ms. Each of the calls to remote services is represented by a client-side and server-side span. For example, the details client-side span is labeled productpage details.myproject.svc.cluster.local:9080. The span nested underneath it, labeled details details.myproject.svc.cluster.local:9080, corresponds to the server-side processing of the request. The trace also shows calls to istio-policy, which reflect authorization checks made by Istio.

Removing the tracing tutorial

Follow the procedure for removing the Bookinfo tutorial to remove the Tracing tutorial.

Prometheus tutorial

Prometheus is an open source system and service monitoring toolkit. Prometheus collects metrics from configured targets at specified intervals, evaluates rule expressions, displays the results, and can trigger alerts if some condition is observed to be true. Grafana or other API consumers can be used to visualize the collected data.

This tutorial uses Service Mesh and the bookinfo tutorial to demonstrate how you can query for metrics using Prometheus.

Prerequisites:

  • OpenShift Container Platform 3.11 or higher installed.

  • Red Hat OpenShift Service Mesh 0.9.TechPreview installed.

  • Bookinfo demonstration application installed.

Querying metrics

  1. Verify that the Prometheus service is running in your cluster by executing the following command:

    $ oc get svc prometheus -n istio-system

    You will see something like the following:

    NAME         CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
    prometheus   10.59.241.54   <none>        9090/TCP   2m
  2. Generate network traffic by accessing the Bookinfo application:

    $ curl -o /dev/null http://$GATEWAY_URL/productpage
  3. A route to access the Prometheus user interface already exists. Query for details of the route:

    $ export PROMETHEUS_URL=$(oc get route -n istio-system prometheus -o jsonpath='{.spec.host}')
  4. Launch a browser and navigate to http://${PROMETHEUS_URL}. You will see the Prometheus home screen, similar to the following figure:

    prometheus home screen
  5. In the Expression field, enter istio_request_duration_seconds_count, and click the Execute button. You will see a screen similar to the following figure:

    prometheus metrics
  6. You can narrow down queries by using selectors. For example, istio_request_duration_seconds_count{destination_workload="reviews-v2"} shows only counters with the matching destination_workload label. For more information about using queries, see the Prometheus documentation and the example after this procedure.

  7. To list all available Prometheus metrics, run the following command:

    $ oc get prometheus -n istio-system -o jsonpath='{.items[*].spec.metrics[*].name}'
    requests_total request_duration_seconds request_bytes response_bytes tcp_sent_bytes_total tcp_received_bytes_total

Note that returned metric names must be prepended with istio_ when used in queries, for example, requests_total is istio_requests_total.
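PromQL functions also apply to these metrics. For example, a query such as the following (illustrative) graphs the per-second request rate to the reviews-v2 workload over the last five minutes:

rate(istio_requests_total{destination_workload="reviews-v2"}[5m])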

Removing the Prometheus tutorial

Follow the procedure for removing the Bookinfo tutorial to remove the Prometheus tutorial.

Kiali tutorial

Kiali works with Istio to visualize your service mesh topology to provide visibility into features like circuit breakers, request rates, and more. Kiali offers insights about the mesh components at different levels, from abstract Applications to Services and Workloads. Kiali provides an interactive graph view of your namespace in real time. It can display the interactions at several levels (applications, versions, workloads) with contextual information and charts on the selected graph node or edge.

This tutorial uses Service Mesh and the bookinfo tutorial to demonstrate how you can use the Kiali console to view the topography and health of your service mesh.

Prerequisites:

  • OpenShift Container Platform 3.11 or higher installed.

  • Red Hat OpenShift Service Mesh 0.9.TechPreview installed.

  • Kiali parameters specified in the custom resource file.

  • Bookinfo demonstration application installed.

Accessing the Kiali console

  1. A route to access the Kiali console already exists. Run the following command to obtain the route and Kiali URL:

    $ oc get routes

    While your exact environment may differ, you should see output similar to the following:

    NAME                   HOST/PORT                                                PATH      SERVICES               PORT              TERMINATION   WILDCARD
    grafana                grafana-istio-system.127.0.0.1.nip.io                          grafana                http                            None
    istio-ingress          istio-ingress-istio-system.127.0.0.1.nip.io                    istio-ingress          http                            None
    istio-ingressgateway   istio-ingressgateway-istio-system.127.0.0.1.nip.io             istio-ingressgateway   http                            None
    jaeger-query           jaeger-query-istio-system.127.0.0.1.nip.io                     jaeger-query           jaeger-query      edge          None
    kiali                  kiali-istio-system.127.0.0.1.nip.io                            kiali                  <all>                           None
    prometheus             prometheus-istio-system.127.0.0.1.nip.io                       prometheus             http-prometheus                 None
    tracing                tracing-istio-system.127.0.0.1.nip.io                          tracing                tracing           edge          None
  2. Launch a browser and navigate to https://${KIALI_URL} (in the output above, this is kiali-istio-system.127.0.0.1.nip.io; you can set the KIALI_URL variable with the command shown after this procedure). You should see the Kiali console login screen.

    Login Page

    Log in to the Kiali console using the user name and password that you specified in the custom resource file during installation.
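The ${KIALI_URL} referenced in step 2 is the host of the kiali route. You can capture it in an environment variable with a command like the following, which mirrors the pattern used for the other consoles in this guide:

$ export KIALI_URL=$(oc get route -n istio-system kiali -o jsonpath='{.spec.host}')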

Overview page

After you log in, you see the Overview page, which provides a quick overview of the health of the various namespaces in your system.