You can customize your Red Hat OpenShift Service Mesh by modifying the default Service Mesh custom resource or by creating a new custom resource.

Red Hat OpenShift Service Mesh custom resources

The istio-system project is used as an example throughout the Service Mesh documentation, but you can use other projects as necessary.

A custom resource allows you to extend the API in a Red Hat OpenShift Service Mesh project or cluster. When you deploy Service Mesh, it creates a default ServiceMeshControlPlane that you can modify to change the project parameters.

The Service Mesh Operator extends the API by adding the ServiceMeshControlPlane resource type, which enables you to create ServiceMeshControlPlane objects within projects. By creating a ServiceMeshControlPlane object, you instruct the Operator to install a Service Mesh control plane into the project, configured with the parameters you set in the ServiceMeshControlPlane object.
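For example, a minimal ServiceMeshControlPlane object can omit most parameters and rely on the defaults described in this section. The following sketch assumes the control plane is deployed into the istio-system project and uses the name basic-install; both are illustrative and can be changed as necessary.

  # Minimal sketch: basic-install and istio-system are example values.
  # Any parameters you omit fall back to the Operator defaults.
  apiVersion: maistra.io/v1
  kind: ServiceMeshControlPlane
  metadata:
    name: basic-install
    namespace: istio-system
  spec:
    istio:
      kiali:
        enabled: true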

This example ServiceMeshControlPlane definition contains all of the supported parameters and deploys Red Hat OpenShift Service Mesh 1.0 images based on Red Hat Enterprise Linux (RHEL).

The 3scale Istio Adapter is deployed and configured in the custom resource file. Using the adapter also requires a working 3scale account (SaaS or On-Premises).

Full example istio-installation.yaml
  apiVersion: maistra.io/v1
  kind: ServiceMeshControlPlane
  metadata:
    name: full-install
  spec:

    istio:
      global:
        proxy:
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 128Mi

      gateways:
        istio-egressgateway:
          autoscaleEnabled: false
        istio-ingressgateway:
          autoscaleEnabled: false

      mixer:
        policy:
          autoscaleEnabled: false

        telemetry:
          autoscaleEnabled: false
          resources:
            requests:
              cpu: 100m
              memory: 1G
            limits:
              cpu: 500m
              memory: 4G

      pilot:
        autoscaleEnabled: false
        traceSampling: 100.0

      kiali:
        enabled: true

      tracing:
        enabled: true
        jaeger:
          template: all-in-one

ServiceMeshControlPlane parameters

The following examples illustrate use of the ServiceMeshControlPlane parameters and the tables provide additional information about supported parameters.

The resources you configure for Red Hat OpenShift Service Mesh with these parameters, including CPUs, memory, and the number of pods, are based on the configuration of your OpenShift cluster. Configure these parameters based on the available resources in your current cluster configuration.

Istio global example

Here is an example that illustrates the Istio global parameters for the ServiceMeshControlPlane and a description of the available parameters with appropriate values.

For the 3scale Istio Adapter to work, disablePolicyChecks must be set to false.

  istio:
    global:
      tag: 1.0.0
      hub: registry.redhat.io/openshift-service-mesh/
      proxy:
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 500m
            memory: 128Mi
      mtls:
        enabled: false
      disablePolicyChecks: true
      policyCheckFailOpen: false
      imagePullSecrets:
        - MyPullSecret

See the OpenShift documentation on Scalability and performance for additional details on CPU and memory resources for the containers in your pod.
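If you plan to use the 3scale Istio Adapter, the global block must set disablePolicyChecks to false, as noted above. A minimal sketch of that change to the example is shown here; all other global values keep their defaults.

  istio:
    global:
      disablePolicyChecks: false   # required for the 3scale Istio Adapter
      policyCheckFailOpen: false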

Table 1. Global parameters
Parameter Description Values Default value

disablePolicyChecks

This boolean indicates whether to disable policy checks

true/false

true

policyCheckFailOpen

This boolean indicates whether traffic is allowed to pass through to the Envoy sidecar when the Mixer policy service cannot be reached

true/false

false

tag

The tag that the Operator uses to pull the Istio images

A valid container image tag

1.0.0

hub

The hub that the Operator uses to pull Istio images

A valid image repo

maistra/ or registry.redhat.io/openshift-service-mesh/

mtls

This controls whether to enable Mutual Transport Layer Security (mTLS) between services by default

true/false

false

imagePullSecrets

If access to the registry providing the Istio images is secure, list an imagePullSecret here

redhat-registry-pullsecret OR quay-pullsecret

None

These parameters are specific to the proxy subset of global parameters.

Table 2. Proxy parameters
Type Parameter Description Values Default value

Resources

cpu

The amount of CPU resources requested for Envoy proxy

CPU resources in cores or millicores based on your environment’s configuration

100m

memory

The amount of memory requested for Envoy proxy

Available memory in bytes based on your environment’s configuration

128Mi

Limits

cpu

The maximum amount of CPU resources Envoy proxy is permitted to use

CPU resources in cores or millicores based on your environment’s configuration

2000m

memory

The maximum amount of memory Envoy proxy is permitted to use

Available memory in bytes based on your environment’s configuration

128Mi
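As a sketch of how the rows in Table 2 map back to the proxy block, the following fragment sets the documented default requests and raises the CPU limit to the 2000m default listed above. These values are illustrative; adjust them to the resources available in your cluster.

  istio:
    global:
      proxy:
        resources:
          requests:
            cpu: 100m        # Table 2 default request
            memory: 128Mi
          limits:
            cpu: 2000m       # Table 2 default limit
            memory: 128Mi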

Istio gateway configuration

Here is an example that illustrates the Istio gateway parameters for the ServiceMeshControlPlane and a description of the available parameters with appropriate values.

  gateways:
    istio-egressgateway:
      autoscaleEnabled: false
      autoscaleMin: 1
      autoscaleMax: 5
    istio-ingressgateway:
      autoscaleEnabled: false
      autoscaleMin: 1
      autoscaleMax: 5
Table 3. Istio Gateway parameters
Type Parameter Description Values Default value

istio-egressgateway

autoscaleEnabled

This parameter enables autoscaling.

true/false

true

autoscaleMin

The minimum number of pods to deploy for the egress gateway based on the autoscaleEnabled setting

A valid number of allocatable pods based on your environment’s configuration

1

autoscaleMax

The maximum number of pods to deploy for the egress gateway based on the autoscaleEnabled setting

A valid number of allocatable pods based on your environment’s configuration

5

istio-ingressgateway

autoscaleEnabled

This parameter enables autoscaling.

true/false

true

autoscaleMin

The minimum number of pods to deploy for the ingress gateway based on the autoscaleEnabled setting

A valid number of allocatable pods based on your environment’s configuration

1

autoscaleMax

The maximum number of pods to deploy for the ingress gateway based on the autoscaleEnabled setting

A valid number of allocatable pods based on your environment’s configuration

5
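For example, to keep autoscaling enabled for the gateways while raising the minimum replica count for ingress, a sketch such as the following could be used. The value of 2 for autoscaleMin is an assumption and should be sized to your cluster.

  gateways:
    istio-ingressgateway:
      autoscaleEnabled: true
      autoscaleMin: 2    # assumption: keep at least two ingress gateway pods
      autoscaleMax: 5
    istio-egressgateway:
      autoscaleEnabled: true
      autoscaleMin: 1
      autoscaleMax: 5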

Istio Mixer configuration

Here is an example that illustrates the Mixer parameters for the ServiceMeshControlPlane and a description of the available parameters with appropriate values.

  mixer:
    enabled: true
    policy:
      autoscaleEnabled: false
    telemetry:
      autoscaleEnabled: false
      resources:
        requests:
          cpu: 100m
          memory: 1G
        limits:
          cpu: 500m
          memory: 4G
Table 4. Istio Mixer policy parameters
Parameter Description Values Default value

enabled

This enables Mixer

true/false

true

autoscaleEnabled

This controls whether to enable autoscaling. Disable this for small environments.

true/false

true

autoscaleMin

The minimum number of pods to deploy based on the autoscaleEnabled setting

A valid number of allocatable pods based on your environment’s configuration

1

autoscaleMax

The maximum number of pods to deploy based on the autoscaleEnabled setting

A valid number of allocatable pods based on your environment’s configuration

5

Table 5. Istio Mixer telemetry parameters
Type Parameter Description Values Default

Resources

cpu

The amount of CPU resources requested for Mixer telemetry

CPU resources in millicores based on your environment’s configuration

1000m

memory

The amount of memory requested for Mixer telemetry

Available memory in bytes based on your environment’s configuration

1G

Limits

cpu

The maximum amount of CPU resources Mixer telemetry is permitted to use

CPU resources in millicores based on your environment’s configuration

4800m

memory

The maximum amount of memory Mixer telemetry is permitted to use

Available memory in bytes based on your environment’s configuration

4G
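The policy autoscaling parameters from Table 4 can be combined with the telemetry resource settings in a single mixer block, as in the following sketch. The replica counts are illustrative only; the resource values repeat the defaults from Table 5.

  mixer:
    enabled: true
    policy:
      autoscaleEnabled: true
      autoscaleMin: 1      # illustrative values; size to your cluster
      autoscaleMax: 5
    telemetry:
      autoscaleEnabled: false
      resources:
        requests:
          cpu: 1000m       # Table 5 default request
          memory: 1G
        limits:
          cpu: 4800m       # Table 5 default limit
          memory: 4G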

Istio Pilot configuration

Here is an example that illustrates the Istio Pilot parameters for the ServiceMeshControlPlane and a description of the available parameters with appropriate values.

  pilot:
    resources:
      requests:
        cpu: 100m
    autoscaleEnabled: false
    traceSampling: 100.0
Table 6. Istio Pilot parameters
Parameter Description Values Default value

cpu

The amount of CPU resources requested for Pilot

CPU resources in millicores based on your environment’s configuration

500m

memory

The amount of memory requested for Pilot

Available memory in bytes based on your environment’s configuration

2048Mi

traceSampling

This value controls how often random sampling occurs. Note: Increase this value for development or testing.

A valid percentage

1.0
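To set the Pilot values from Table 6 explicitly, a sketch such as the following could be used. Lowering traceSampling to the 1.0 default is usually appropriate outside development or testing; the resource values repeat the documented defaults.

  pilot:
    autoscaleEnabled: false
    traceSampling: 1.0       # Table 6 default; increase for development or testing
    resources:
      requests:
        cpu: 500m            # Table 6 default request
        memory: 2048Mi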

3scale configuration

Here is an example that illustrates the 3scale Istio Adapter parameters for the Red Hat OpenShift Service Mesh custom resource and a description of the available parameters with appropriate values.

  threeScale:
      enabled: false
      PARAM_THREESCALE_LISTEN_ADDR: 3333
      PARAM_THREESCALE_LOG_LEVEL: info
      PARAM_THREESCALE_LOG_JSON: true
      PARAM_THREESCALE_LOG_GRPC: false
      PARAM_THREESCALE_REPORT_METRICS: true
      PARAM_THREESCALE_METRICS_PORT: 8080
      PARAM_THREESCALE_CACHE_TTL_SECONDS: 300
      PARAM_THREESCALE_CACHE_REFRESH_SECONDS: 180
      PARAM_THREESCALE_CACHE_ENTRIES_MAX: 1000
      PARAM_THREESCALE_CACHE_REFRESH_RETRIES: 1
      PARAM_THREESCALE_ALLOW_INSECURE_CONN: false
      PARAM_THREESCALE_CLIENT_TIMEOUT_SECONDS: 10
      PARAM_THREESCALE_GRPC_CONN_MAX_SECONDS: 60
Table 7. 3scale parameters
Parameter Description Values Default value

enabled

Whether to use the 3scale adapter

true/false

false

PARAM_THREESCALE_LISTEN_ADDR

Sets the listen address for the gRPC server

Valid port number

3333

PARAM_THREESCALE_LOG_LEVEL

Sets the minimum log output level.

debug, info, warn, error, or none

info

PARAM_THREESCALE_LOG_JSON

Controls whether the log is formatted as JSON

true/false

true

PARAM_THREESCALE_LOG_GRPC

Controls whether the log contains gRPC info

true/false

true

PARAM_THREESCALE_REPORT_METRICS

Controls whether 3scale system and backend metrics are collected and reported to Prometheus

true/false

true

PARAM_THREESCALE_METRICS_PORT

Sets the port that the 3scale /metrics endpoint can be scraped from

Valid port number

8080

PARAM_THREESCALE_CACHE_TTL_SECONDS

Time period, in seconds, to wait before purging expired items from the cache

Time period in seconds

300

PARAM_THREESCALE_CACHE_REFRESH_SECONDS

Time period, in seconds, before expiry when the adapter attempts to refresh cache elements

Time period in seconds

180

PARAM_THREESCALE_CACHE_ENTRIES_MAX

Maximum number of items that can be stored in the cache at any time. Set to 0 to disable caching

Valid number

1000

PARAM_THREESCALE_CACHE_REFRESH_RETRIES

The number of times unreachable hosts are retried during a cache update loop

Valid number

1

PARAM_THREESCALE_ALLOW_INSECURE_CONN

Allows skipping certificate verification when calling 3scale APIs. Enabling this is not recommended.

true/false

false

PARAM_THREESCALE_CLIENT_TIMEOUT_SECONDS

Sets the number of seconds to wait before terminating requests to 3scale System and Backend

Time period in seconds

10

PARAM_THREESCALE_GRPC_CONN_MAX_SECONDS

Sets the maximum number of seconds (+/-10% jitter) that a connection may exist before it is closed

Time period in seconds

60

Configuring Kiali

When the Service Mesh Operator creates the ServiceMeshControlPlane, it also processes the Kiali resource. The Kiali Operator then uses this object when creating Kiali instances.

The default Kiali parameters specified in the ServiceMeshControlPlane are as follows:

Default Kiali parameters
apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
spec:
    kiali:
      enabled: true
      dashboard:
        viewOnlyMode: false
      ingress:
        enabled: true
Table 8. Kiali parameters
Parameter Description Values Default value
enabled

This enables or disables Kiali in Service Mesh. Kiali is installed by default. If you do not want to install Kiali, change the enabled value to false.

true/false

true

dashboard
   viewOnlyMode

Whether the Kiali console should be in a view-only mode, not allowing the user to make changes to the Service Mesh.

true/false

false

ingress
   enabled

This enables/disables ingress.

true/false

true
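For example, to run the Kiali console in view-only mode while keeping ingress enabled, a sketch of the kiali block might look like the following:

  spec:
    kiali:
      enabled: true
      dashboard:
        viewOnlyMode: true    # users can view but not change the Service Mesh
      ingress:
        enabled: true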

Configuring Kiali for Grafana

When you install Kiali and Grafana as part of Red Hat OpenShift Service Mesh, the Operator configures the following by default:

  • Grafana is enabled as an external service for Kiali

  • Grafana authorization for the Kiali console

  • Grafana URL for the Kiali console

Kiali can automatically detect the Grafana URL. However, if you have a custom Grafana installation that is not easily auto-detectable by Kiali, you must update the URL value in the ServiceMeshControlPlane resource.

Additional Grafana parameters
spec:
  kiali:
    enabled: true
    dashboard:
      viewOnlyMode: false
      grafanaURL:  "https://grafana-istio-system.127.0.0.1.nip.io"
    ingress:
      enabled: true

Configuring Kiali for Jaeger

When you install Kiali and Jaeger as part of Red Hat OpenShift Service Mesh, the Operator configures the following by default:

  • Jaeger is enabled as an external service for Kiali

  • Jaeger authorization for the Kiali console

  • Jaeger URL for the Kiali console

Kiali can automatically detect the Jaeger URL. However, if you have a custom Jaeger installation that is not easily auto-detectable by Kiali, you must update the URL value in the ServiceMeshControlPlane resource.

Additional Jaeger parameters
spec:
  kiali:
    enabled: true
    dashboard:
      viewOnlyMode: false
      jaegerURL: "http://jaeger-query-istio-system.127.0.0.1.nip.io"
    ingress:
      enabled: true

Configuring Jaeger

When the Service Mesh Operator creates the ServiceMeshControlPlane resource, it also creates the Jaeger resource. The Jaeger Operator then uses this object when creating Jaeger instances.

The default Jaeger parameters specified in the ServiceMeshControlPlane are as follows:

Default Jaeger parameters
  apiVersion: maistra.io/v1
  kind: ServiceMeshControlPlane
  spec:
    istio:
      tracing:
        enabled: true
        ingress:
          enabled: true
Table 9. Jaeger parameters
Parameter Description Values Default value
tracing
   enabled

This enables or disables tracing in Service Mesh. Jaeger is installed by default. If you do not want to install Jaeger, change the enabled value to false.

true/false

true

ingress
   enabled

This enables/disables ingress.

true/false

true
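If you do not want to install Jaeger, a sketch of the corresponding tracing block is shown below; setting enabled to false prevents the Operator from deploying Jaeger.

  spec:
    istio:
      tracing:
        enabled: false    # do not deploy Jaeger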

Configuring Elasticsearch

Jaeger can be configured for different storage backends:

  • Memory - Simple in-memory storage, only recommended for development, demo, or testing purposes. This is the default option for the AllInOne deployment strategy. Do NOT use for production environments.

  • Elasticsearch - For production use. This is the default option for the Production deployment strategy.

The default template strategy in the ServiceMeshControlPlane resource is AllInOne. For production, the only supported storage option is Elasticsearch; therefore, you must configure the ServiceMeshControlPlane to request the production-elasticsearch template when you deploy Service Mesh within a production environment.

Elasticsearch is a memory-intensive application. The initial set of nodes created by the OpenShift Container Platform installation may not be large enough to support the Elasticsearch cluster. Additional nodes must be added to the cluster if you want to run with the recommended amount (or more) of memory. Each Elasticsearch node can operate with a lower memory setting, though this is not recommended for production deployments.

You should modify the default Elasticsearch configuration to match your use case. You can adjust both the CPU and memory limits for each component by modifying the resources block with valid memory and CPU values.

Default Jaeger parameters for Elasticsearch in production
  apiVersion: maistra.io/v1
  kind: ServiceMeshControlPlane
  spec:
    istio:
      tracing:
        jaeger:
          template: production-elasticsearch
          elasticsearch:
            nodeCount: 3
            redundancyPolicy:
            resources:
              requests:
                memory: "16Gi"
                cpu: "1"
              limits:
                memory: "16Gi"
Table 10. Elasticsearch parameters
Parameter Values Description
nodeCount

integer value

Number of Elasticsearch nodes

cpu

Specified in units of cores (e.g., 200m, 0.5, 1)

Number of central processing units

memory

Specified in units of bytes (e.g., 200Ki, 50Mi, 5Gi)

Memory limit

Table 11. Sample configurations
Parameter Proof of Concept Minimal Deployment
Node count

1

3

Requests CPU

500m

1

Requests memory

1Gi

16Gi

Limits CPU

500m

1

Limits memory

1Gi

16Gi

For production use, you should have no less than 16Gi allocated to each Pod by default, but preferably allocate as much as you can, up to 64Gi per Pod.
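As a sketch, the Proof of Concept column in Table 11 translates into the following Elasticsearch settings. This configuration is suitable for evaluation only, not for production.

  apiVersion: maistra.io/v1
  kind: ServiceMeshControlPlane
  spec:
    istio:
      tracing:
        jaeger:
          template: production-elasticsearch
          elasticsearch:
            nodeCount: 1            # Proof of Concept value from Table 11
            resources:
              requests:
                memory: "1Gi"
                cpu: "500m"
              limits:
                memory: "1Gi"
                cpu: "500m"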

Next steps