
Minor release updates

If you installed the logging Operators using the Automatic update approval option, your Operators receive minor version updates automatically. You do not need to complete any manual update steps.

If you installed the logging Operators using the Manual update approval option, you must manually approve minor version updates. For more information, see Manually approving a pending Operator update.
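
If you use the Manual update approval option and prefer the CLI to the web console, you can find and approve a pending install plan by using the oc client. The following commands are a minimal sketch that assumes the Operators are installed in the openshift-logging namespace; <install_plan_name> is a placeholder for the name reported by the first command:

    $ oc -n openshift-logging get installplan

    $ oc -n openshift-logging patch installplan <install_plan_name> --type merge --patch '{"spec":{"approved":true}}'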

Major release updates

For major version updates, you must complete some manual steps.

For major release version compatibility and support information, see OpenShift Operator Life Cycles.

Upgrading the Red Hat OpenShift Logging Operator to watch all namespaces

In logging 5.7 and older versions, the Red Hat OpenShift Logging Operator only watches the openshift-logging namespace. If you want the Red Hat OpenShift Logging Operator to watch all namespaces on your cluster, you must redeploy the Operator. You can complete the following procedure to redeploy the Operator without deleting your logging components.

Prerequisites
  • You have installed the OpenShift CLI (oc).

  • You have administrator permissions.

Procedure
  1. Delete the subscription by running the following command (if you do not know the resource names used in steps 1 to 3, see the listing command after these steps):

    $ oc -n openshift-logging delete subscription <subscription>
  2. Delete the Operator group by running the following command:

    $ oc -n openshift-logging delete operatorgroup <operator_group_name>
  3. Delete the cluster service version (CSV) by running the following command:

    $ oc delete clusterserviceversion cluster-logging.<version>
  4. Redeploy the Red Hat OpenShift Logging Operator by following the "Installing Logging" documentation.
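
If you do not know the names of the subscription, Operator group, or CSV referenced in the previous steps, you can list the resources in the namespace first, for example:

    $ oc -n openshift-logging get subscription,operatorgroup,clusterserviceversion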

Verification
  • Check that the targetNamespaces field in the OperatorGroup resource is not present or is set to an empty string.

    To do this, run the following command and inspect the output:

    $ oc get operatorgroup <operator_group_name> -o yaml
    Example output
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: openshift-logging-f52cn
      namespace: openshift-logging
    spec:
      upgradeStrategy: Default
    status:
      namespaces:
      - ""
    # ...
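
    Alternatively, you can query the field directly with a JSONPath expression. If targetNamespaces is absent or empty, the command prints nothing:

    $ oc -n openshift-logging get operatorgroup <operator_group_name> -o jsonpath='{.spec.targetNamespaces}'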

Updating the Red Hat OpenShift Logging Operator

To update the Red Hat OpenShift Logging Operator to a new major release version, you must modify the update channel for the Operator subscription.

Prerequisites
  • You have installed the Red Hat OpenShift Logging Operator.

  • You have administrator permissions.

  • You have access to the OpenShift Container Platform web console and are viewing the Administrator perspective.

Procedure
  1. Navigate to Operators → Installed Operators.

  2. Select the openshift-logging project.

  3. Click the Red Hat OpenShift Logging Operator.

  4. Click Subscription. In the Subscription details section, click the Update channel link. This link text might be stable or stable-5.9, depending on your current update channel.

  5. In the Change Subscription Update Channel window, select the latest major version update channel, stable-5.9, and click Save. Note the cluster-logging.v5.9.<z> version.

  6. Wait for a few seconds, and then go to Operators → Installed Operators to verify that the Red Hat OpenShift Logging Operator version matches the latest cluster-logging.v5.9.<z> version.

  7. On the Operators → Installed Operators page, wait for the Status field to report Succeeded.

  8. Check if the LokiStack custom resource contains the v13 schema version and add it if it is missing. For correctly adding the v13 schema version, see "Upgrading the LokiStack storage schema".
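
If you manage Operator subscriptions from the CLI instead of the web console, you can make the channel change from steps 4 and 5 with a patch. The following command is a sketch that assumes your subscription name in the openshift-logging namespace is substituted for the <subscription> placeholder:

    $ oc -n openshift-logging patch subscription <subscription> --type merge --patch '{"spec":{"channel":"stable-5.9"}}'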

Updating the Loki Operator

To update the Loki Operator to a new major release version, you must modify the update channel for the Operator subscription.

Prerequisites
  • You have installed the Loki Operator.

  • You have administrator permissions.

  • You have access to the OpenShift Container Platform web console and are viewing the Administrator perspective.

Procedure
  1. Navigate to Operators → Installed Operators.

  2. Select the openshift-operators-redhat project.

  3. Click the Loki Operator.

  4. Click Subscription. In the Subscription details section, click the Update channel link. This link text might be stable or stable-5.y, depending on your current update channel.

  5. In the Change Subscription Update Channel window, select the latest major version update channel, stable-5.y, and click Save. Note the loki-operator.v5.y.z version.

  6. Wait for a few seconds, then click Operators → Installed Operators. Verify that the Loki Operator version matches the latest loki-operator.v5.y.z version.

  7. On the Operators → Installed Operators page, wait for the Status field to report Succeeded.

  8. Check if the LokiStack custom resource contains the v13 schema version and add it if it is missing. For correctly adding the v13 schema version, see "Upgrading the LokiStack storage schema".
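
To check from the CLI whether the LokiStack custom resource already lists the v13 schema version, you can print the configured schemas. The following command is a sketch that assumes a LokiStack named <name> in the openshift-logging namespace:

    $ oc -n openshift-logging get lokistack <name> -o jsonpath='{.spec.storage.schemas}'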

Upgrading the LokiStack storage schema

If you are using the Red Hat OpenShift Logging Operator with the Loki Operator, the Red Hat OpenShift Logging Operator 5.9 or later supports the v13 schema version in the LokiStack custom resource. Upgrading to the v13 schema version is recommended because it is the schema version that will be supported going forward.

Procedure
  • Add the v13 schema version in the LokiStack custom resource as follows:

    apiVersion: loki.grafana.com/v1
    kind: LokiStack
    # ...
    spec:
    # ...
      storage:
        schemas:
        # ...
          version: v12 (1)
        - effectiveDate: "<yyyy>-<mm>-<future_dd>" (2)
          version: v13
    # ...
    1 Do not delete. Data persists in its original schema version. Keep the previous schema versions to avoid data loss.
    2 Set a future date that has not yet started in the Coordinated Universal Time (UTC) time zone.

    To edit the LokiStack custom resource, you can run the oc edit command:

    $ oc edit lokistack <name> -n openshift-logging
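
    If you prefer a non-interactive edit, you can append the same entry with a JSON patch. This is a sketch that assumes the spec.storage.schemas field already exists as a list in your resource:

    $ oc -n openshift-logging patch lokistack <name> --type json \
        --patch '[{"op": "add", "path": "/spec/storage/schemas/-", "value": {"effectiveDate": "<yyyy>-<mm>-<future_dd>", "version": "v13"}}]'
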
Verification
  • On or after the specified effectiveDate, check that the LokistackSchemaUpgradesRequired alert is not displayed in the web console under Administrator → Observe → Alerting.

Updating the OpenShift Elasticsearch Operator

To update the OpenShift Elasticsearch Operator to the current version, you must modify the subscription.

The Logging 5.9 release does not contain an updated version of the OpenShift Elasticsearch Operator. If you currently use the OpenShift Elasticsearch Operator released with Logging 5.8, it will continue to work with Logging until the EOL of Logging 5.8. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. For more information on the Logging lifecycle dates, see Platform Agnostic Operators.

Prerequisites
  • If you are using Elasticsearch as the default log store, and Kibana as the UI, update the OpenShift Elasticsearch Operator before you update the Red Hat OpenShift Logging Operator.

    If you update the Operators in the wrong order, Kibana does not update and the Kibana custom resource (CR) is not created. To fix this issue, delete the Red Hat OpenShift Logging Operator pod, as shown in the example command after this list. When the Red Hat OpenShift Logging Operator pod redeploys, it creates the Kibana CR and Kibana becomes available again.

  • The Logging status is healthy:

    • All pods have a ready status.

    • The Elasticsearch cluster is healthy.

  • Your Elasticsearch and Kibana data is backed up.

  • You have administrator permissions.

  • You have installed the OpenShift CLI (oc) for the verification steps.
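
If you updated the Operators in the wrong order, you can delete the Red Hat OpenShift Logging Operator pod from the CLI. This is a minimal sketch, assuming the pod carries the default name=cluster-logging-operator label; the replacement pod recreates the Kibana CR when it starts:

    $ oc -n openshift-logging delete pod -l name=cluster-logging-operator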

Procedure
  1. In the OpenShift Container Platform web console, click Operators → Installed Operators.

  2. Select the openshift-operators-redhat project.

  3. Click OpenShift Elasticsearch Operator.

  4. Click Subscription → Channel.

  5. In the Change Subscription Update Channel window, select stable-5.y and click Save. Note the elasticsearch-operator.v5.y.z version.

  6. Wait for a few seconds, then click Operators → Installed Operators. Verify that the OpenShift Elasticsearch Operator version matches the latest elasticsearch-operator.v5.y.z version.

  7. On the Operators → Installed Operators page, wait for the Status field to report Succeeded.

Verification
  1. Verify that all Elasticsearch pods have a Ready status by entering the following command and observing the output:

    $ oc get pod -n openshift-logging --selector component=elasticsearch
    Example output
    NAME                                            READY   STATUS    RESTARTS   AGE
    elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk   2/2     Running   0          31m
    elasticsearch-cdm-1pbrl44l-2-5c6d87589f-gx5hk   2/2     Running   0          30m
    elasticsearch-cdm-1pbrl44l-3-88df5d47-m45jc     2/2     Running   0          29m
  2. Verify that the Elasticsearch cluster status is green by entering the following command and observing the output:

    $ oc exec -n openshift-logging -c elasticsearch elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk -- health
    Example output
    {
      "cluster_name" : "elasticsearch",
      "status" : "green",
      ...
    }
  3. Verify that the Elasticsearch cron jobs are created by entering the following commands and observing the output:

    $ oc project openshift-logging
    $ oc get cronjob
    Example output
    NAME                     SCHEDULE       SUSPEND   ACTIVE   LAST SCHEDULE   AGE
    elasticsearch-im-app     */15 * * * *   False     0        <none>          56s
    elasticsearch-im-audit   */15 * * * *   False     0        <none>          56s
    elasticsearch-im-infra   */15 * * * *   False     0        <none>          56s
  4. Verify that the log store is updated to the correct version and the indices are green by entering the following command and observing the output:

    $ oc exec -c elasticsearch <any_es_pod_in_the_cluster> -- indices

    Verify that the output includes the app-00000x, infra-00000x, audit-00000x, and .security indices:

    Sample output with indices in a green status
    Tue Jun 30 14:30:54 UTC 2020
    health status index                                                                 uuid                   pri rep docs.count docs.deleted store.size pri.store.size
    green  open   infra-000008                                                          bnBvUFEXTWi92z3zWAzieQ   3 1       222195            0        289            144
    green  open   infra-000004                                                          rtDSzoqsSl6saisSK7Au1Q   3 1       226717            0        297            148
    green  open   infra-000012                                                          RSf_kUwDSR2xEuKRZMPqZQ   3 1       227623            0        295            147
    green  open   .kibana_7                                                             1SJdCqlZTPWlIAaOUd78yg   1 1            4            0          0              0
    green  open   infra-000010                                                          iXwL3bnqTuGEABbUDa6OVw   3 1       248368            0        317            158
    green  open   infra-000009                                                          YN9EsULWSNaxWeeNvOs0RA   3 1       258799            0        337            168
    green  open   infra-000014                                                          YP0U6R7FQ_GVQVQZ6Yh9Ig   3 1       223788            0        292            146
    green  open   infra-000015                                                          JRBbAbEmSMqK5X40df9HbQ   3 1       224371            0        291            145
    green  open   .orphaned.2020.06.30                                                  n_xQC2dWQzConkvQqei3YA   3 1            9            0          0              0
    green  open   infra-000007                                                          llkkAVSzSOmosWTSAJM_hg   3 1       228584            0        296            148
    green  open   infra-000005                                                          d9BoGQdiQASsS3BBFm2iRA   3 1       227987            0        297            148
    green  open   infra-000003                                                          1-goREK1QUKlQPAIVkWVaQ   3 1       226719            0        295            147
    green  open   .security                                                             zeT65uOuRTKZMjg_bbUc1g   1 1            5            0          0              0
    green  open   .kibana-377444158_kubeadmin                                           wvMhDwJkR-mRZQO84K0gUQ   3 1            1            0          0              0
    green  open   infra-000006                                                          5H-KBSXGQKiO7hdapDE23g   3 1       226676            0        295            147
    green  open   infra-000001                                                          eH53BQ-bSxSWR5xYZB6lVg   3 1       341800            0        443            220
    green  open   .kibana-6                                                             RVp7TemSSemGJcsSUmuf3A   1 1            4            0          0              0
    green  open   infra-000011                                                          J7XWBauWSTe0jnzX02fU6A   3 1       226100            0        293            146
    green  open   app-000001                                                            axSAFfONQDmKwatkjPXdtw   3 1       103186            0        126             57
    green  open   infra-000016                                                          m9c1iRLtStWSF1GopaRyCg   3 1        13685            0         19              9
    green  open   infra-000002                                                          Hz6WvINtTvKcQzw-ewmbYg   3 1       228994            0        296            148
    green  open   infra-000013                                                          KR9mMFUpQl-jraYtanyIGw   3 1       228166            0        298            148
    green  open   audit-000001                                                          eERqLdLmQOiQDFES1LBATQ   3 1            0            0          0              0
  5. Verify that the log visualizer is updated to the correct version by entering the following command and observing the output:

    $ oc get kibana kibana -o json

    Verify that the output includes a Kibana pod with the ready status:

    Sample output with a ready Kibana pod
    [
      {
        "clusterCondition": {
          "kibana-5fdd766ffd-nb2jj": [
            {
              "lastTransitionTime": "2020-06-30T14:11:07Z",
              "reason": "ContainerCreating",
              "status": "True",
              "type": ""
            },
            {
              "lastTransitionTime": "2020-06-30T14:11:07Z",
              "reason": "ContainerCreating",
              "status": "True",
              "type": ""
            }
          ]
        },
        "deployment": "kibana",
        "pods": {
          "failed": [],
          "notReady": [],
          "ready": []
        },
        "replicaSets": [
          "kibana-5fdd766ffd"
        ],
        "replicas": 1
      }
    ]
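
You can also check the Kibana pods directly from the CLI. The following command assumes the Kibana deployment uses the default component=kibana label; all listed pods should report a Running status with all containers ready:

    $ oc -n openshift-logging get pods -l component=kibana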