You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh. For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page.
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Red Hat OpenShift Service Mesh addresses a variety of problems in a microservice architecture by creating a centralized point of control in an application. It adds a transparent layer on existing distributed applications without requiring any changes to the application code.
Microservice architectures split the work of enterprise applications into modular services, which can make scaling and maintenance easier. However, as an enterprise application built on a microservice architecture grows in size and complexity, it becomes difficult to understand and manage. Service Mesh can address those architectural problems by capturing or intercepting traffic between services, and it can modify, redirect, or create new requests to other services.
Service Mesh, which is based on the open source Istio project, provides an easy way to create a network of deployed services that provides discovery, load balancing, service-to-service authentication, failure recovery, metrics, and monitoring. A service mesh also provides more complex operational functionality, including A/B testing, canary releases, access control, and end-to-end authentication.
If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal.
From the Customer Portal, you can:
Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products.
Submit a support case to Red Hat Support.
Access other product documentation.
To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager. Insights provides details about issues and, if available, information on how to solve a problem.
If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version.
When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support.
The must-gather tool enables you to collect diagnostic information about your OpenShift Container Platform cluster, including virtual machines and other data related to Red Hat OpenShift Service Mesh.
For prompt support, supply diagnostic information for both OpenShift Container Platform and Red Hat OpenShift Service Mesh.
The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues, including:
Resource definitions
Service logs
By default, the oc adm must-gather command uses the default plugin image and writes into ./must-gather.local.
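For example, to perform a default collection with no additional arguments:
$ oc adm must-gather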
Alternatively, you can collect specific information by running the command with the appropriate arguments as described in the following sections:
To collect data related to one or more specific features, use the --image argument with an image, as listed in a following section.
For example:
$ oc adm must-gather \
--image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.14.10
To collect the audit logs, use the -- /usr/bin/gather_audit_logs argument, as described in a following section.
For example:
$ oc adm must-gather -- /usr/bin/gather_audit_logs
Audit logs are not collected as part of the default set of information to reduce the size of the files.
When you run oc adm must-gather, a new pod with a random name is created in a new project on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local in the current working directory.
For example:
NAMESPACE NAME READY STATUS RESTARTS AGE
...
openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s
openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s
...
Optionally, you can run the oc adm must-gather command in a specific namespace by using the --run-namespace option.
For example:
$ oc adm must-gather --run-namespace <namespace> \
--image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.14.10
Access to the cluster as a user with the cluster-admin role.
The OpenShift Container Platform CLI (oc) installed.
You can use the oc adm must-gather CLI command to collect information about your cluster, including features and objects associated with Red Hat OpenShift Service Mesh.
Access to the cluster as a user with the cluster-admin role.
The OpenShift Container Platform CLI (oc) installed.
To collect Red Hat OpenShift Service Mesh data with must-gather, you must specify the Red Hat OpenShift Service Mesh image.
$ oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:2.6
To collect Red Hat OpenShift Service Mesh data for a specific Service Mesh control plane namespace with must-gather, you must specify the Red Hat OpenShift Service Mesh image and namespace. In this example, after gather, replace <namespace> with your Service Mesh control plane namespace, such as istio-system.
$ oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:2.6 gather <namespace>
This creates a local directory that contains the following items:
The Istio Operator namespace and its child objects
All control plane namespaces and their child objects
All namespaces and their child objects that belong to any service mesh
All Istio custom resource definitions (CRD)
All Istio CRD objects, such as VirtualServices, in a given namespace
All Istio webhooks
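For example, after the command completes, you can list the collected output from your current working directory (an illustrative command; the directory suffix is generated at run time):
$ ls -d must-gather.local.*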
The following are the only supported configurations for Red Hat OpenShift Service Mesh:
OpenShift Container Platform version 4.6 or later.
OpenShift Online and Red Hat OpenShift Dedicated are not supported for Red Hat OpenShift Service Mesh.
The deployment must be contained within a single OpenShift Container Platform cluster that is not federated.
This release of Red Hat OpenShift Service Mesh is only available on OpenShift Container Platform x86_64.
This release only supports configurations where all Service Mesh components are contained in the OpenShift Container Platform cluster in which it operates. It does not support management of microservices that reside outside of the cluster, or in a multi-cluster scenario.
This release only supports configurations that do not integrate external services such as virtual machines.
For additional information about Red Hat OpenShift Service Mesh lifecycle and supported configurations, refer to the Support Policy.
Red Hat OpenShift Service Mesh provides a number of key capabilities uniformly across a network of services:
Traffic Management - Control the flow of traffic and API calls between services, make calls more reliable, and make the network more robust in the face of adverse conditions.
Service Identity and Security - Provide services in the mesh with a verifiable identity and provide the ability to protect service traffic as it flows over networks of varying degrees of trustworthiness.
Policy Enforcement - Apply organizational policy to the interaction between services, ensure access policies are enforced and resources are fairly distributed among consumers. Policy changes are made by configuring the mesh, not by changing application code.
Telemetry - Gain understanding of the dependencies between services and the nature and flow of traffic between them, providing the ability to quickly identify issues.
This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs).
Red Hat OpenShift Service Mesh contains a remotely exploitable vulnerability, CVE-2021-39156, where an HTTP request with a fragment (a section at the end of a URI that begins with a # character) in the URI path could bypass the Istio URI path-based authorization policies. For instance, an Istio authorization policy denies requests sent to the URI path /user/profile. In the vulnerable versions, a request with URI path /user/profile#section1 bypasses the deny policy and routes to the backend (with the normalized URI path /user/profile%23section1), possibly leading to a security incident.
You are impacted by this vulnerability if you use authorization policies with DENY actions and operation.paths, or ALLOW actions and operation.notPaths.
With the mitigation, the fragment part of the request’s URI is removed before the authorization and routing. This prevents a request with a fragment in its URI from bypassing authorization policies which are based on the URI without the fragment part.
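For reference, a policy of the affected DENY + operation.paths form might look like the following (an illustrative sketch only; the policy name, namespace, and path are hypothetical). With the mitigation in place, a request for /user/profile#section1 is evaluated against /user/profile and denied:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-user-profile
  namespace: foo
spec:
  action: DENY
  rules:
  - to:
    - operation:
        paths: ["/user/profile"]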
Istio generates hostnames for both the hostname itself and all matching ports. For instance, a virtual service or Gateway for a host of "httpbin.foo" generates a config matching "httpbin.foo" and "httpbin.foo:*". However, exact match authorization policies only match the exact string given for the hosts or notHosts fields.
Your cluster is impacted if you have AuthorizationPolicy resources using exact string comparison for the rule to determine hosts or notHosts.
You must update your authorization policy rules to use prefix match instead of exact match. For example, replacing hosts: ["httpbin.com"] with hosts: ["httpbin.com:*"] in the first AuthorizationPolicy example.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: httpbin
  namespace: foo
spec:
  action: DENY
  rules:
  - from:
    - source:
        namespaces: ["dev"]
    to:
    - operation:
        hosts: ["httpbin.com","httpbin.com:*"]
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: httpbin
  namespace: default
spec:
  action: DENY
  rules:
  - to:
    - operation:
        hosts: ["httpbin.example.com:*"]
This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.
There are manual steps that must be completed to address CVE-2021-29492 and CVE-2021-31920.
Istio contains a remotely exploitable vulnerability where an HTTP request path with multiple slashes or escaped slash characters (%2F or %5C) could potentially bypass an Istio authorization policy when path-based authorization rules are used.
For example, assume an Istio cluster administrator defines an authorization DENY policy to reject the request at path /admin. A request sent to the URL path //admin will NOT be rejected by the authorization policy.
According to RFC 3986, the path //admin with multiple slashes should technically be treated as a different path from /admin. However, some backend services choose to normalize the URL paths by merging multiple slashes into a single slash. This can result in a bypass of the authorization policy (//admin does not match /admin), and a user can access the resource at path /admin in the backend; this would represent a security incident.
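For illustration, a DENY + paths policy of the kind described above might look like this (a sketch only; the name and namespace are hypothetical). Without slash merging, a request to //admin does not match the /admin rule and is not rejected:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-admin
  namespace: default
spec:
  action: DENY
  rules:
  - to:
    - operation:
        paths: ["/admin"]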
Your cluster is impacted by this vulnerability if you have authorization policies using the ALLOW action + notPaths field or DENY action + paths field patterns. These patterns are vulnerable to unexpected policy bypasses.
Your cluster is NOT impacted by this vulnerability if:
You don’t have authorization policies.
Your authorization policies don’t define paths or notPaths fields.
Your authorization policies use ALLOW action + paths field or DENY action + notPaths field patterns. These patterns could only cause unexpected rejection instead of policy bypasses. The upgrade is optional for these cases.
The Red Hat OpenShift Service Mesh configuration location for path normalization is different from the Istio configuration.
Istio authorization policies can be based on the URL paths in the HTTP request. Path normalization, also known as URI normalization, modifies and standardizes the incoming requests' paths so that the normalized paths can be processed in a standard way. Syntactically different paths may be equivalent after path normalization.
Istio supports the following normalization schemes on the request paths before evaluating against the authorization policies and routing the requests:
Option | Description | Notes
---|---|---
NONE | No normalization is done. Anything received by Envoy will be forwarded exactly as-is to any backend service. | This setting is vulnerable to CVE-2021-31920.
BASE | This is currently the option used in the default installation of Istio. It applies the normalize_path option on Envoy proxies, which follows RFC 3986 with extra normalization to convert backslashes to forward slashes. | This setting is vulnerable to CVE-2021-31920.
MERGE_SLASHES | Slashes are merged after the BASE normalization. | Update to this setting to mitigate CVE-2021-31920.
DECODE_AND_MERGE_SLASHES | The strictest setting when you allow all traffic by default. This setting is recommended, with the caveat that you must thoroughly test your authorization policy routes. Percent-encoded slash and backslash characters (%2F, %2f, %5C, and %5c) are decoded to / or \ before the slashes are merged. | Update to this setting to mitigate CVE-2021-31920. This setting is more secure, but it also has the potential to break applications. Test your applications before deploying to production.
The normalization algorithms are conducted in the following order:
Percent-decode %2F, %2f, %5C, and %5c.
The RFC 3986 and other normalization implemented by the normalize_path option in Envoy.
Merge slashes.
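As an illustration, a hypothetical request path (assuming the DECODE_AND_MERGE_SLASHES scheme) is transformed step by step as follows:
/admin%2F..%2Fsecret//data   # hypothetical original request path
/admin/../secret//data       # 1. percent-decode %2F, %2f, %5C, and %5c
/secret//data                # 2. RFC 3986 normalization (dot segments resolved by normalize_path)
/secret/data                 # 3. merge slashes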
While these normalization options represent recommendations from HTTP standards and common industry practices, applications may interpret a URL in any way they choose. When using denial policies, ensure that you understand how your application behaves.
Ensuring Envoy normalizes request paths to match your backend services' expectations is critical to the security of your system.
The following examples can be used as a reference for you to configure your system.
The normalized URL paths, or the original URL paths if NONE is selected, will be:
Used to check against the authorization policies.
Forwarded to the backend application.
If your application… | Choose…
---|---
Relies on the proxy to do normalization | BASE, MERGE_SLASHES, or DECODE_AND_MERGE_SLASHES
Normalizes request paths based on RFC 3986 and does not merge slashes. | BASE
Normalizes request paths based on RFC 3986 and merges slashes, but does not decode percent-encoded slashes. | MERGE_SLASHES
Normalizes request paths based on RFC 3986, decodes percent-encoded slashes, and merges slashes. | DECODE_AND_MERGE_SLASHES
Processes request paths in a way that is incompatible with RFC 3986. | NONE
To configure path normalization for Red Hat OpenShift Service Mesh, specify the following in your ServiceMeshControlPlane. Use the configuration examples to help determine the settings for your system.
spec:
  global:
    pathNormalization: <option>
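For example, to merge slashes after the BASE normalization (an illustrative value; choose the option that matches your applications, as described in the tables above):
spec:
  global:
    pathNormalization: MERGE_SLASHES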
This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.
This release also added support for configuring cipher suites.
This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.
There are manual steps that must be completed to address CVE-2020-8663.
The fix for CVE-2020-8663: envoy: Resource exhaustion when accepting too many connections added a configurable limit on downstream connections. The configuration option for this limit must be configured to mitigate this vulnerability.
These manual steps are required to mitigate this CVE whether you are using the 1.1 version or the 1.0 version of Red Hat OpenShift Service Mesh.
This new configuration option is called overload.global_downstream_max_connections, and it is configurable as a proxy runtime setting. Perform the following steps to configure limits at the Ingress Gateway.
Create a file named bootstrap-override.json with the following text to force the proxy to override the bootstrap template and load runtime configuration from disk:
{ "runtime": { "symlink_root": "/var/lib/istio/envoy/runtime" } }
Create a secret from the bootstrap-override.json file, replacing <SMCPnamespace> with the namespace where you created the service mesh control plane (SMCP):
$ oc create secret generic -n <SMCPnamespace> gateway-bootstrap --from-file=bootstrap-override.json
Update the SMCP configuration to activate the override.
apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
spec:
  istio:
    gateways:
      istio-ingressgateway:
        env:
          ISTIO_BOOTSTRAP_OVERRIDE: /var/lib/istio/envoy/custom-bootstrap/bootstrap-override.json
        secretVolumes:
        - mountPath: /var/lib/istio/envoy/custom-bootstrap
          name: custom-bootstrap
          secretName: gateway-bootstrap
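One way to apply this SMCP update is to edit the resource in place (an illustrative command; replace basic-install with the name of your ServiceMeshControlPlane):
$ oc edit smcp -n <SMCPnamespace> basic-install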
To set the new configuration option, create a secret that has the desired value for the overload.global_downstream_max_connections setting. The following example uses a value of 10000:
$ oc create secret generic -n <SMCPnamespace> gateway-settings --from-literal=overload.global_downstream_max_connections=10000
Update the SMCP again to mount the secret in the location where Envoy is looking for runtime configuration:
apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
spec:
  template: default
  #Change the version to "v1.0" if you are on the 1.0 stream.
  version: v1.1
  istio:
    gateways:
      istio-ingressgateway:
        env:
          ISTIO_BOOTSTRAP_OVERRIDE: /var/lib/istio/envoy/custom-bootstrap/bootstrap-override.json
        secretVolumes:
        - mountPath: /var/lib/istio/envoy/custom-bootstrap
          name: custom-bootstrap
          secretName: gateway-bootstrap
        # below is the new secret mount
        - mountPath: /var/lib/istio/envoy/runtime
          name: gateway-settings
          secretName: gateway-settings
When updating from Elasticsearch 5 to Elasticsearch 6, you must delete your Jaeger instance and then recreate it because of an issue with certificates. Re-creating the Jaeger instance triggers the creation of a new set of certificates. If you are using persistent storage, the same volumes can be mounted for the new Jaeger instance as long as the Jaeger name and namespace for the new Jaeger instance are the same as the deleted Jaeger instance.
Determine the name of your Jaeger custom resource file:
$ oc get jaeger -n istio-system
You should see something like the following:
NAME AGE
jaeger 3d21h
Copy the generated custom resource file into a temporary directory:
$ oc get jaeger jaeger -oyaml -n istio-system > /tmp/jaeger-cr.yaml
Delete the Jaeger instance:
$ oc delete jaeger jaeger -n istio-system
Recreate the Jaeger instance from your copy of the custom resource file:
$ oc create -f /tmp/jaeger-cr.yaml -n istio-system
Delete the copy of the generated custom resource file:
$ rm /tmp/jaeger-cr.yaml
Before you begin, create a copy of your Jaeger custom resource file.
Delete the Jaeger instance by deleting the custom resource file:
$ oc delete -f <jaeger-cr-file>
For example:
$ oc delete -f jaeger-prod-elasticsearch.yaml
Recreate your Jaeger instance from the backup copy of your custom resource file:
$ oc create -f <jaeger-cr-file>
Validate that your Pods have restarted:
$ oc get pods -n jaeger-system -w
This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.
This release of Red Hat OpenShift Service Mesh addresses a security vulnerability.
This release of Red Hat OpenShift Service Mesh adds support for a disconnected installation.
This release of Red Hat OpenShift Service Mesh adds support for Istio 1.4.6 and Jaeger 1.17.1.
If you are updating from Red Hat OpenShift Service Mesh 1.0 to 1.1, you must update the ServiceMeshControlPlane resource to update the control plane components to the new version.
In the web console, click the Red Hat OpenShift Service Mesh Operator.
Click the Project menu and choose the project where your ServiceMeshControlPlane is deployed from the list, for example istio-system.
Click the name of your control plane, for example basic-install.
Click YAML and add a version field to the spec: of your ServiceMeshControlPlane resource. For example, to update to Red Hat OpenShift Service Mesh 1.1.0, add version: v1.1.
spec:
  version: v1.1
  ...
The version field specifies the version of Service Mesh to install and defaults to the latest available version.
Note that support for Red Hat OpenShift Service Mesh v1.0 ended in October 2020. You must upgrade to either v1.1 or v2.0.
Some features available in previous releases have been deprecated or removed.
Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.
The following custom resources were deprecated in release 1.1.5 and were removed in release 1.1.12:
Policy - The Policy resource is deprecated and will be replaced by the PeerAuthentication resource in a future release.
MeshPolicy - The MeshPolicy resource is deprecated and will be replaced by the PeerAuthentication resource in a future release.
v1alpha1 RBAC API - The v1alpha1 RBAC policy is deprecated by the v1beta1 AuthorizationPolicy. RBAC (Role Based Access Control) defines ServiceRole and ServiceRoleBinding objects.
ServiceRole
ServiceRoleBinding
RbacConfig - RbacConfig implements the Custom Resource Definition for controlling Istio RBAC behavior.
ClusterRbacConfig (versions prior to Red Hat OpenShift Service Mesh 1.0)
ServiceMeshRbacConfig (Red Hat OpenShift Service Mesh version 1.0 and later)
In Kiali, the login and LDAP strategies are deprecated. A future version will introduce authentication using OpenID providers.
The following components are also deprecated in this release and will be replaced by the Istiod component in a future release.
Mixer - access control and usage policies
Pilot - service discovery and proxy configuration
Citadel - certificate generation
Galley - configuration validation and distribution
These limitations exist in Red Hat OpenShift Service Mesh:
Red Hat OpenShift Service Mesh does not support IPv6, as it is not supported by the upstream Istio project, nor fully supported by OpenShift Container Platform.
Graph layout - The layout for the Kiali graph can render differently, depending on your application architecture and the data to display (number of graph nodes and their interactions). Because it is difficult if not impossible to create a single layout that renders nicely for every situation, Kiali offers a choice of several different layouts. To choose a different layout, you can choose a different Layout Schema from the Graph Settings menu.
The first time you access related services such as Jaeger and Grafana, from the Kiali console, you must accept the certificate and re-authenticate using your OpenShift Container Platform login credentials. This happens due to an issue with how the framework displays embedded pages in the console.
These are the known issues in Red Hat OpenShift Service Mesh:
Jaeger/Kiali Operator upgrade blocked with operator pending. When upgrading the Jaeger or Kiali Operators with Service Mesh 1.0.x installed, the operator status shows as Pending.
Workaround: See the linked Knowledge Base article for more information.
Istio-14743 Due to limitations in the version of Istio that this release of Red Hat OpenShift Service Mesh is based on, there are several applications that are currently incompatible with Service Mesh. See the linked community issue for details.
MAISTRA-858 The following Envoy log messages describing deprecated options and configurations associated with Istio 1.1.x are expected:
[2019-06-03 07:03:28.943][19][warning][misc] [external/envoy/source/common/protobuf/utility.cc:129] Using deprecated option 'envoy.api.v2.listener.Filter.config'. This configuration will be removed from Envoy soon.
[2019-08-12 22:12:59.001][13][warning][misc] [external/envoy/source/common/protobuf/utility.cc:174] Using deprecated option 'envoy.api.v2.Listener.use_original_dst' from file lds.proto. This configuration will be removed from Envoy soon.
MAISTRA-806 Evicted Istio Operator Pod causes mesh and CNI not to deploy.
Workaround: If the istio-operator pod is evicted while deploying the control plane, delete the evicted istio-operator pod.
MAISTRA-681 When the control plane has many namespaces, it can lead to performance issues.
MAISTRA-465 The Maistra Operator fails to create a service for operator metrics.
MAISTRA-453 If you create a new project and deploy pods immediately, sidecar injection does not occur. The operator fails to add the maistra.io/member-of label before the pods are created; therefore, the pods must be deleted and recreated for sidecar injection to occur.
MAISTRA-158 Applying multiple gateways referencing the same hostname will cause all gateways to stop functioning.
New issues for Kiali should be created in the OpenShift Service Mesh project with the Component set to Kiali.
These are the known issues in Kiali:
KIALI-2206 When you are accessing the Kiali console for the first time, and there is no cached browser data for Kiali, the “View in Grafana” link on the Metrics tab of the Kiali Service Details page redirects to the wrong location. The only way you would encounter this issue is if you are accessing Kiali for the first time.
KIALI-507 Kiali does not support Internet Explorer 11. This is because the underlying frameworks do not support Internet Explorer. To access the Kiali console, use one of the two most recent versions of the Chrome, Edge, Firefox or Safari browser.
The following issues have been resolved in the current release:
MAISTRA-2371 Handle tombstones in listerInformer. The updated cache codebase was not handling tombstones when translating the events from the namespace caches to the aggregated cache, leading to a panic in the goroutine.
OSSM-542 Galley is not using the new certificate after rotation.
OSSM-99 Workloads generated from direct pod without labels may crash Kiali.
OSSM-93 IstioConfigList can’t filter by two or more names.
OSSM-92 Cancelling unsaved changes on the VS/DR YAML edit page does not cancel the changes.
OSSM-90 Traces not available on the service details page.
MAISTRA-1649 Headless services conflict when in different namespaces. When deploying headless services within different namespaces the endpoint configuration is merged and results in invalid Envoy configurations being pushed to the sidecars.
MAISTRA-1541 Panic in kubernetesenv when the controller is not set on owner reference. If a pod has an ownerReference which does not specify the controller, this will cause a panic within the kubernetesenv cache.go code.
MAISTRA-1352 Cert-manager Custom Resource Definitions (CRD) from the control plane installation have been removed for this release and future releases. If you have already installed Red Hat OpenShift Service Mesh, the CRDs must be removed manually if cert-manager is not being used.
MAISTRA-1001 Closing HTTP/2 connections could lead to segmentation faults in istio-proxy.
MAISTRA-932 Added the requires metadata to add a dependency relationship between the Jaeger Operator and the OpenShift Elasticsearch Operator. Ensures that when the Jaeger Operator is installed, it automatically deploys the OpenShift Elasticsearch Operator if it is not available.
MAISTRA-862 Galley dropped watches and stopped providing configuration to other components after many namespace deletions and re-creations.
MAISTRA-833 Pilot stopped delivering configuration after many namespace deletions and re-creations.
MAISTRA-684 The default Jaeger version in the istio-operator is 1.12.0, which does not match Jaeger version 1.13.1 that shipped in Red Hat OpenShift Service Mesh 0.12.TechPreview.
MAISTRA-622 In Maistra 0.12.0/TP12, permissive mode does not work. The user has the option to use Plain text mode or Mutual TLS mode, but not permissive.
MAISTRA-572 Jaeger cannot be used with Kiali. In this release Jaeger is configured to use the OAuth proxy, but is also only configured to work through a browser and does not allow service access. Kiali cannot properly communicate with the Jaeger endpoint and it considers Jaeger to be disabled. See also TRACING-591.
MAISTRA-357 In OpenShift 4 Beta on AWS, it is not possible, by default, to access a TCP or HTTPS service through the ingress gateway on a port other than port 80. The AWS load balancer has a health check that verifies if port 80 on the service endpoint is active. Without a service running on port 80, the load balancer health check fails.
MAISTRA-348 OpenShift 4 Beta on AWS does not support ingress gateway traffic on ports other than 80 or 443. If you configure your ingress gateway to handle TCP traffic with a port number other than 80 or 443, you have to use the service hostname provided by the AWS load balancer rather than the OpenShift router as a workaround.
MAISTRA-193 Unexpected console info messages are visible when health checking is enabled for citadel.
Bug 1821432 Toggle controls in OpenShift Container Platform Control Resource details page do not update the CR correctly. UI Toggle controls in the Service Mesh Control Plane (SMCP) Overview page in the OpenShift Container Platform web console sometimes update the wrong field in the resource. To update a ServiceMeshControlPlane resource, edit the YAML content directly or update the resource from the command line instead of clicking the toggle controls.
KIALI-3239 If a Kiali Operator pod has failed with a status of “Evicted” it blocks the Kiali operator from deploying. The workaround is to delete the Evicted pod and redeploy the Kiali operator.
KIALI-3118 After changes to the ServiceMeshMemberRoll, for example adding or removing projects, the Kiali pod restarts and then displays errors on the Graph page while the Kiali pod is restarting.
KIALI-3096 Runtime metrics fail in Service Mesh. There is an OAuth filter between the Service Mesh and Prometheus, requiring a bearer token to be passed to Prometheus before access is granted. Kiali has been updated to use this token when communicating to the Prometheus server, but the application metrics are currently failing with 403 errors.
KIALI-3070 This bug only affects custom dashboards, not the default dashboards. When you select labels in metrics settings and refresh the page, your selections are retained in the menu but your selections are not displayed on the charts.
KIALI-2686 When the control plane has many namespaces, it can lead to performance issues.