If you are moving from Red Hat OpenShift Service Mesh 2.6 to Red Hat OpenShift Service Mesh 3, read the content in this section first as it contains important information and explanations on the differences between the versions. These differences have a direct impact on your installation and configuration of OpenShift Service Mesh 3.
If you are a current Red Hat OpenShift Service Mesh user, there are a number of important differences you need to understand between OpenShift Service Mesh 2 and OpenShift Service Mesh 3 before you migrate, including the following:
A new Operator
Integrations like Observability and Kiali are installed separately
New resources: Istio and IstioCNI
Scoping of a mesh with discoverySelectors and labels
New considerations for sidecar injection
Support for multiple control planes
Independently managed gateways
Explicit Istio OpenShift route creation
Canary upgrades
Support for Istio multi-cluster topologies
Support for Istioctl
Change to Kubernetes network policy management
Transport layer security (TLS) configuration change
You must be using OpenShift Service Mesh 2.6 to migrate to OpenShift Service Mesh 3.
Red Hat OpenShift Service Mesh 3 is a major update with a feature set closer to the Istio project. Whereas OpenShift Service Mesh 2 was based on the midstream Maistra project, OpenShift Service Mesh 3 is based directly on Istio. This means OpenShift Service Mesh 3 is managed using a different, simplified Operator and provides greater support for the latest stable features of Istio.
This alignment with the Istio project along with lessons learned in the first two major releases of OpenShift Service Mesh have resulted in the following changes:
OpenShift Service Mesh 1 and 2 were based on Istio, and included additional functionality that was maintained as part of the midstream Maistra project, but not part of the upstream Istio project. While this provided extra features to OpenShift Service Mesh users, the effort to maintain Maistra meant that OpenShift Service Mesh 2 was usually several releases behind Istio, and did not support major features like multi-cluster deployment. Since the release of OpenShift Service Mesh 1 and 2, Istio has matured to cover most of the use cases addressed by Maistra.
Basing OpenShift Service Mesh 3 directly on Istio ensures that OpenShift Service Mesh 3 supports users on the latest stable Istio features while Red Hat contributes directly to the Istio community on behalf of its customers.
OpenShift Service Mesh 3 uses an Operator that is maintained upstream as the Sail Operator in the istio-ecosystem organization on GitHub. The OpenShift Service Mesh 3 Operator is smaller in scope and includes significant changes from the Operator used in OpenShift Service Mesh 2:
The Istio resource replaces the ServiceMeshControlPlane resource.
The IstioCNI resource manages the Istio Container Network Interface (CNI).
Red Hat OpenShift Observability components are installed and configured separately.
You can install the OpenShift Service Mesh 3 Operator, or you can run OpenShift Service Mesh 2.6 and OpenShift Service Mesh 3 in the same cluster using either the multi-tenant deployment model, or the cluster-wide model.
Red Hat OpenShift Service Mesh 3 uses two new resources:
Istio resource
IstioCNI resource
OpenShift Service Mesh 2 uses a resource called ServiceMeshControlPlane to configure Istio. In OpenShift Service Mesh 3, the ServiceMeshControlPlane resource is replaced with a resource called Istio.
The Istio resource contains a spec.values field that derives its schema from Istio's Helm chart values. This means that configuration examples from the community Istio documentation can often be applied directly to the OpenShift Service Mesh 3 Istio resource.
The Istio resource also provides a validation schema, so you can explore the resource by running the following OpenShift command line interface (CLI) command:
$ oc explain istios.spec.values
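As an illustration, a minimal Istio resource might look like the following sketch. The apiVersion and version values shown here are assumptions that depend on your Operator release, and the fields under spec.values follow the community Istio Helm chart schema:

```yaml
apiVersion: sailoperator.io/v1   # may differ by Operator release
kind: Istio
metadata:
  name: default
spec:
  namespace: istio-system
  version: v1.24.1               # illustrative; use a version your Operator supports
  values:
    meshConfig:
      accessLogFile: /dev/stdout # a standard Istio Helm chart value
```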
The Istio Container Network Interface (CNI) node agent is used to configure traffic redirection for pods in the mesh. It runs as a daemon set, on every node, with elevated privileges.
In OpenShift Service Mesh 2, the Operator deployed an Istio CNI instance for each minor version of Istio present in the cluster, and pods were automatically annotated during sidecar injection so they picked up the correct Istio CNI. While this meant that the management of Istio CNI was mostly hidden from you, it obscured the fact that the Istio CNI agent has an independent lifecycle from the Istio control plane and, in some cases, must be upgraded separately.
For these reasons, the OpenShift Service Mesh 3 Operator manages the Istio CNI node agent with a separate resource called IstioCNI. A single instance of this resource is shared by all Istio control planes, which are managed by Istio resources.
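A minimal IstioCNI resource can be sketched as follows; the apiVersion, namespace, and version values are illustrative assumptions:

```yaml
apiVersion: sailoperator.io/v1   # may differ by Operator release
kind: IstioCNI
metadata:
  name: default                  # a single, shared instance
spec:
  namespace: istio-cni           # illustrative namespace for the node agent
  version: v1.24.1               # illustrative
```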
A significant change in Red Hat OpenShift Service Mesh 3 is that the Operator no longer installs and manages observability components such as Prometheus and Grafana with the Istio control plane. It also no longer installs and manages Red Hat OpenShift distributed tracing platform components such as distributed tracing platform (Tempo) and Red Hat OpenShift distributed tracing data collection (previously Jaeger and Elasticsearch), or Kiali.
The OpenShift Service Mesh 3 Operator limits its scope to Istio-related resources, with observability components supported and managed by the independent Operators that make up Red Hat OpenShift Observability, such as the following:
Logging
User workload monitoring
Red Hat OpenShift distributed tracing platform
Kiali and the OpenShift Service Mesh Console (OSSMC) plugin are still supported with the Kiali Operator provided by Red Hat.
This simplification greatly reduces the footprint and complexity of OpenShift Service Mesh 3, while providing better, production-grade support for observability through Red Hat OpenShift Observability components.
In OpenShift Service Mesh 2.4, a cluster-wide mode was introduced to allow a mesh to be cluster-scoped, with the option to limit the mesh using an Istio feature called discoverySelectors. Using discoverySelectors limits the Istio control plane's visibility to a set of namespaces defined with a label selector. This aligned with how community Istio worked, and allowed Istio to manage cluster-level resources. For more information, see "Labels and Selectors".
OpenShift Service Mesh 3 makes all meshes cluster-wide by default. This means that Istio control planes are cluster-scoped resources, the ServiceMeshMemberRoll and ServiceMeshMember resources are no longer present, and control planes watch, or discover, the entire cluster by default. The control plane's discovery of namespaces can be limited using the discoverySelectors feature.
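For example, discoverySelectors can be configured under meshConfig in the Istio resource. The apiVersion and the label key shown here are assumptions for illustration; any Kubernetes label selector works:

```yaml
apiVersion: sailoperator.io/v1   # may differ by Operator release
kind: Istio
metadata:
  name: default
spec:
  namespace: istio-system
  values:
    meshConfig:
      discoverySelectors:
        - matchLabels:
            istio-discovery: enabled   # hypothetical label applied to mesh namespaces
```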
Red Hat OpenShift Service Mesh 2 supported using pod annotations and labels to configure sidecar injection and there was no need to indicate which control plane a workload belonged to.
With OpenShift Service Mesh 3, even though the Istio control plane discovers a namespace, the workloads present in that namespace still require sidecar proxies to be included as workloads in the service mesh, and to be able to use Istio’s many features.
In OpenShift Service Mesh 3, sidecar injection works the same way as it does for Istio, with pod or namespace labels used to trigger sidecar injection. However, it might be necessary to include a label that indicates which control plane the workload belongs to.
The Istio project has deprecated pod annotations in favor of labels for sidecar injection.
When an Istio resource has the name default and InPlace upgrades are used, there is a single IstioRevision with the name default, and the label istio-injection=enabled is used for sidecar injection.
However, an IstioRevision resource is required to have a different name in the following cases:
Multiple control plane instances are present.
A RevisionBased, canary-style control plane upgrade is in progress.
If multiple control plane instances are running, or you chose the RevisionBased update strategy during your OpenShift Service Mesh 3 installation, then the IstioRevision resource must have a name other than default. In that case, you must use a label that indicates which control plane revision the workloads belong to by specifying istio.io/rev=<istiorevision_name>.
These labels can be applied at the workload or namespace level.
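For example, a Namespace manifest carrying an injection label might be sketched as follows; the namespace and revision names are hypothetical:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-app                       # hypothetical application namespace
  labels:
    # use istio-injection: enabled instead when the revision is named "default"
    istio.io/rev: mymesh-v1-24-1     # hypothetical IstioRevision name
```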
You can inspect available revisions by running the following command:
$ oc get istiorevision
Red Hat OpenShift Service Mesh 3 supports multiple service meshes in the same cluster, but in a different manner than in OpenShift Service Mesh 2. A cluster administrator must create multiple Istio instances and then configure discoverySelectors appropriately to ensure that there is no overlap between mesh namespaces.
Because Istio resources are cluster-scoped, they must have unique names to represent unique meshes within the same cluster. The OpenShift Service Mesh 3 Operator uses this unique name to create a resource called IstioRevision with a name in the format {Istio name} or {Istio name}-{Istio version}.
Each instance of IstioRevision is responsible for managing a single control plane. Workloads are assigned to a specific control plane using Istio's revision labels in the format istio.io/rev={IstioRevision name}. The name with the version identifier becomes important for supporting canary-style control plane upgrades.
In Istio, gateways are used to manage traffic entering (ingress) and exiting (egress) the mesh. Red Hat OpenShift Service Mesh 2 deployed and managed an ingress gateway and an egress gateway with the Service Mesh control plane. Both gateways were configured using the ServiceMeshControlPlane resource.
The OpenShift Service Mesh 3 Operator does not create or manage gateways.
Instead, gateways in OpenShift Service Mesh 3 are created and managed independently of the Operator and control plane, using gateway injection or the Kubernetes Gateway API. This provides greater flexibility, ensures that gateways can be fully customized and managed as part of a Red Hat OpenShift GitOps pipeline, and allows gateways to be deployed and managed alongside their applications with the same lifecycle.
This change was made for two reasons:
To start with a gateway configuration that can expand over time to meet the more robust needs of a production environment.
Gateways are better managed together with their corresponding workloads.
Gateways may continue to be deployed onto nodes or namespaces independent of applications, for example, onto a centralized gateway node. Istio gateways also remain eligible to be deployed on OpenShift Container Platform infrastructure nodes.
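As a sketch, a Kubernetes Gateway API resource that Istio reconciles into a gateway deployment might look like the following; the resource names and hostname are hypothetical:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: my-gateway                   # hypothetical
  namespace: my-app                  # deployed alongside the application
spec:
  gatewayClassName: istio            # Istio's implementation of the Gateway API
  listeners:
    - name: http
      port: 80
      protocol: HTTP
      hostname: app.example.com      # hypothetical
```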
If you are using OpenShift Service Mesh 2.6 and have not migrated from ServiceMeshControlPlane-defined gateways to gateway injection, then you must follow the OpenShift Service Mesh 2.x gateway migration procedure before you can move to OpenShift Service Mesh 3.
An OpenShift Route resource allows an application to be exposed with a public URL, using the OpenShift Container Platform Ingress Operator to manage HAProxy-based Ingress Controllers.
Red Hat OpenShift Service Mesh 2 used Istio OpenShift Routing (IOR), which automatically created and managed OpenShift routes for Istio gateways. While this was convenient, as the Operator managed these routes for you, it also caused confusion around ownership because many Route resources are managed by administrators. Istio OpenShift Routing also lacked the ability to configure an independent Route resource, created unnecessary routes, and exhibited unpredictable behavior during updates.
Thus, in OpenShift Service Mesh 3, when a Route is desired to expose an Istio gateway, you must create and manage it manually. If a route is not desired, you can instead expose an Istio gateway through a Kubernetes service of type LoadBalancer.
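A manually created Route that forwards to a gateway's Service could be sketched as follows. The service name, namespace, and host are hypothetical, and passthrough termination assumes the Istio gateway terminates TLS itself:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-gateway                 # hypothetical
  namespace: istio-ingress         # hypothetical gateway namespace
spec:
  host: app.example.com            # hypothetical public hostname
  to:
    kind: Service
    name: my-gateway               # the gateway's Kubernetes Service
  port:
    targetPort: https
  tls:
    termination: passthrough       # TLS is terminated by the Istio gateway, not the router
```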
Red Hat OpenShift Service Mesh 2 supported only in-place style updates, which created risk for large meshes where, after the control plane was updated, all workloads must update to the new control plane version without a simple way to roll back if something goes wrong.
OpenShift Service Mesh 3 retains support for simple in-place style updates, and adds support for canary-style updates of the Istio control plane using Istio's revision feature.
The Istio resource manages Istio revision labels using the IstioRevision resource. When the Istio resource's updateStrategy type is set to RevisionBased, it creates Istio revision labels using the Istio resource's name combined with the Istio version, for example mymesh-v1-21-2.
During an update, a new IstioRevision deploys the new Istio control plane with an updated revision label, for example mymesh-v1-22-0. Workloads can then be migrated between control planes using the revision label on namespaces or workloads, for example istio.io/rev=mymesh-v1-22-0.
You can set updateStrategy to RevisionBased to use canary updates. Be aware that setting updateStrategy to RevisionBased also has implications for gateways and for some integrations with OpenShift Service Mesh, such as the cert-manager tool integration.
Red Hat OpenShift Service Mesh 2 supported one form of multi-cluster deployment, federation, which was introduced in OpenShift Service Mesh 2.1. Each cluster maintained its own independent control plane in this topology, with services only shared between those meshes on an as-needed basis.
Communication between federated meshes occurs through Istio gateways, so there is no need for Service Mesh control planes to watch remote Kubernetes control planes, as is the case with Istio's multi-cluster service mesh topologies. Federation is ideal where service meshes are loosely coupled, such as those managed by different administrative teams.
OpenShift Service Mesh 3 introduces support for the following Istio multi-cluster topologies as well:
Multi-Primary
Primary-Remote
External control planes
These topologies effectively stretch a single, unified service mesh across multiple clusters, which is ideal when all clusters involved are managed by the same administrative team. Istio’s multi-cluster topologies are also ideal for implementing high-availability or failover use cases across a commonly managed set of applications.
Red Hat OpenShift Service Mesh 1 and 2 did not include support for Istioctl, the command line utility for the Istio project that includes many diagnostic and debugging utilities. OpenShift Service Mesh 3 introduces support for Istioctl for select commands.
| Command | Description |
|---|---|
| admin | Manage control plane (istiod) configuration |
| analyze | Analyze Istio configuration and print validation messages |
| bug-report | Cluster information and log capture support tool |
| completion | Generate the autocompletion script for the specified shell |
| create-remote-secret | Create a secret with credentials to allow Istio to access remote Kubernetes apiservers |
| help | Help about any command |
| manifest | Commands related to Istio manifests |
| proxy-config | Retrieve information about proxy configuration from Envoy (Kubernetes only) |
| proxy-status | Retrieves the synchronization status of each Envoy in the mesh |
| remote-clusters | Lists the remote clusters each istiod instance is connected to |
| version | Prints out build version information |
Installation and management of Istio is only supported by the OpenShift Service Mesh 3 Operator.
By default, Red Hat OpenShift Service Mesh 2 created Kubernetes NetworkPolicy resources with the following behavior:
Ensured mesh applications and the control plane could communicate with each other.
Restricted ingress for mesh applications to only member projects.
OpenShift Service Mesh 3 does not create these policies. Instead, you must configure the level of isolation required for your environment. Istio provides fine-grained access control of service mesh workloads through Authorization Policies. For more information, see "Authorization Policies".
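For example, an AuthorizationPolicy that only allows traffic originating in the same namespace might be sketched as follows; the policy and namespace names are hypothetical:

```yaml
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: allow-same-namespace   # hypothetical
  namespace: my-app            # hypothetical mesh namespace
spec:
  action: ALLOW
  rules:
    - from:
        - source:
            namespaces: ["my-app"]   # only traffic from this namespace is allowed
```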
In Red Hat OpenShift Service Mesh 2, you created the ServiceMeshControlPlane resource and enabled mTLS strict mode by setting spec.security.dataPlane.mtls to true.
You were able to set the minimum and maximum TLS protocol versions by setting spec.security.controlPlane.tls.minProtocolVersion or spec.security.controlPlane.tls.maxProtocolVersion in your ServiceMeshControlPlane resource.
In OpenShift Service Mesh 3, the Istio resource replaces the ServiceMeshControlPlane resource and does not include these settings.
To enable mTLS strict mode in OpenShift Service Mesh 3, you must apply the corresponding PeerAuthentication and DestinationRule resources.
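For example, mesh-wide strict mTLS can be sketched with a PeerAuthentication resource in the control plane namespace:

```yaml
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # the control plane namespace makes the policy mesh-wide
spec:
  mtls:
    mode: STRICT            # reject plaintext traffic between mesh workloads
```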
In OpenShift Service Mesh 3, you can set the minimum TLS protocol version by setting spec.meshConfig.tlsDefaults.minProtocolVersion in your Istio resource. For more information, see "Istio Workload Minimum TLS Version Configuration".
In OpenShift Service Mesh 2 and OpenShift Service Mesh 3, auto mTLS remains enabled by default.
Labels and Selectors (Kubernetes documentation)
Istio Workload Minimum TLS Version Configuration (Istio documentation)
Authorization Policies (Istio documentation)