Installing the Cluster Logging and Elasticsearch Operators

You can use the OpenShift Dedicated console to install cluster logging by deploying the Cluster Logging and Elasticsearch Operators. The Cluster Logging Operator creates and manages the components of the logging stack. The Elasticsearch Operator creates and manages the Elasticsearch cluster used by cluster logging.

The OpenShift Dedicated cluster logging solution requires that you install both the Cluster Logging Operator and the Elasticsearch Operator.

Prerequisites

Ensure that you have the necessary persistent storage for Elasticsearch. Note that each Elasticsearch node requires its own storage volume.
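
For example, you can check which storage classes and persistent volumes are available before you install; a quick sketch using the oc CLI (the gp2 storage class referenced later in the example is an AWS default and might not exist in your environment):

  oc get storageclass
  oc get pv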

Elasticsearch is a memory-intensive application. Each Elasticsearch node needs 16G of memory for both memory requests and limits. The initial set of OpenShift Dedicated nodes might not be large enough to support the Elasticsearch cluster. You must add additional nodes to the OpenShift Dedicated cluster to run with the recommended memory or higher. Each Elasticsearch node can operate with a lower memory setting, though this is not recommended for production deployments.
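
As a quick capacity check, you can list the allocatable memory that each node reports; a sketch (output and values depend on your instance types):

  oc get nodes -o custom-columns=NAME:.metadata.name,ALLOCATABLE_MEMORY:.status.allocatable.memory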

Procedure
  1. Install the Elasticsearch Operator via the OperatorHub. See: Adding Operators to a cluster.
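
    If you prefer the CLI, the same installation can be expressed as a Namespace, OperatorGroup, and Subscription. This is a sketch only: the channel value is an assumption and must match the channel shown for the Elasticsearch Operator in OperatorHub on your cluster version.

      apiVersion: v1
      kind: Namespace
      metadata:
        name: "openshift-operators-redhat"
      ---
      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: "openshift-operators-redhat"
        namespace: "openshift-operators-redhat"
      spec: {}
      ---
      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: "elasticsearch-operator"
        namespace: "openshift-operators-redhat"
      spec:
        channel: "4.4"    # assumption: use the channel listed in OperatorHub for your version
        name: "elasticsearch-operator"
        source: "redhat-operators"
        sourceNamespace: "openshift-marketplace"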

  2. Install the Cluster Logging Operator using the OpenShift Dedicated web console for best results (a CLI alternative is sketched after these sub-steps):

    1. In the OpenShift Dedicated web console, click Operators → OperatorHub.

    2. Choose Cluster Logging from the list of available Operators, and click Install.

    3. On the Create Operator Subscription page, under A specific namespace on the cluster, select openshift-logging. Then, click Subscribe.
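
    If you would rather subscribe from the CLI, a minimal sketch follows (it assumes the openshift-logging project already exists, and the channel value is an assumption that must match what OperatorHub shows for your version):

      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: "cluster-logging"
        namespace: "openshift-logging"
      spec:
        targetNamespaces:
        - "openshift-logging"
      ---
      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: "cluster-logging"
        namespace: "openshift-logging"
      spec:
        channel: "4.4"    # assumption: use the channel listed in OperatorHub for your version
        name: "cluster-logging"
        source: "redhat-operators"
        sourceNamespace: "openshift-marketplace"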

  3. Verify the operator installations:

    1. Switch to the Operators → Installed Operators page.

    2. Ensure that Cluster Logging is listed in the openshift-logging project with a Status of InstallSucceeded.

    3. Ensure that Elasticsearch Operator is listed in the openshift-operators-redhat project with a Status of InstallSucceeded. The Elasticsearch Operator is copied to all other projects.

      During installation an operator might display a Failed status. If the operator then installs with an InstallSucceeded message, you can safely ignore the Failed message.

      If either operator does not appear as installed, to troubleshoot further:

      • Switch to the Operators → Installed Operators page and inspect the Status column for any errors or failures.

      • Switch to the Workloads → Pods page and check the logs in any Pods in the openshift-logging and openshift-operators-redhat projects that are reporting issues.
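
      • Alternatively, run the equivalent checks from the CLI; a sketch (CSV names carry version suffixes that vary by release):

        oc get csv -n openshift-logging
        oc get csv -n openshift-operators-redhat
        oc get pods -n openshift-logging
        oc get pods -n openshift-operators-redhat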

  4. Create a cluster logging instance:

    1. Switch to the Administration → Custom Resource Definitions page.

    2. On the Custom Resource Definitions page, click ClusterLogging.

    3. On the Custom Resource Definition Overview page, select View Instances from the Actions menu.

    4. On the Cluster Loggings page, click Create Cluster Logging.

      You might have to refresh the page to load the data.

    5. In the YAML, replace the code with the following:

      This default cluster logging configuration should support a wide array of environments. Review the topics on tuning and configuring the cluster logging components for information on the modifications you can make to your cluster logging deployment.

      apiVersion: "logging.openshift.io/v1"
      kind: "ClusterLogging"
      metadata:
        name: "instance"
        namespace: "openshift-logging"
      spec:
        managementState: "Managed"
        logStore:
          type: "elasticsearch"
          elasticsearch:
            nodeCount: 3
            storage:
              storageClassName: "gp2"
              size: "200Gi"
            redundancyPolicy: "SingleRedundancy"
            nodeSelector:
              node-role.kubernetes.io/worker: ""
            resources:
              requests:
                memory: 8G
        visualization:
          type: "kibana"
          kibana:
            replicas: 1
            nodeSelector:
              node-role.kubernetes.io/worker: ""
        curation:
          type: "curator"
          curator:
            schedule: "30 3 * * *"
            nodeSelector:
              node-role.kubernetes.io/worker: ""
        collection:
          logs:
            type: "fluentd"
            fluentd: {}
            nodeSelector:
              node-role.kubernetes.io/worker: ""
    6. Click Create. This creates the Cluster Logging Custom Resource and Elasticsearch Custom Resource, which you can edit to make changes to your cluster logging deployment.
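
      Equivalently, if you save the same YAML to a file, you could create and later inspect the instance from the CLI; a sketch with an illustrative file name:

        oc create -f clusterlogging-instance.yaml
        oc get clusterlogging instance -n openshift-logging -o yaml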

  5. Verify the installation:

    1. Switch to the Workloads → Pods page.

    2. Select the openshift-logging project.

      You should see several pods for cluster logging, Elasticsearch, Fluentd, and Kibana similar to the following list:

      • cluster-logging-operator-cb795f8dc-xkckc

      • elasticsearch-cdm-b3nqzchd-1-5c6797-67kfz

      • elasticsearch-cdm-b3nqzchd-2-6657f4-wtprv

      • elasticsearch-cdm-b3nqzchd-3-588c65-clg7g

      • fluentd-2c7dg

      • fluentd-9z7kk

      • fluentd-br7r2

      • fluentd-fn2sb

      • fluentd-pb2f8

      • fluentd-zqgqx

      • kibana-7fb4fd4cc9-bvt4p
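
      The same check is available from the CLI; a sketch (the random pod name suffixes will differ in your cluster):

        oc get pods -n openshift-logging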