Logging v6.0 is a significant upgrade from previous releases, achieving several longstanding goals of Cluster Logging:

  • Introduction of distinct operators to manage logging components (e.g., collectors, storage, visualization).

  • Removal of support for managed log storage and visualization based on Elastic products (i.e., Elasticsearch, Kibana).

  • Deprecation of the Fluentd log collector implementation.

  • Removal of support for ClusterLogging.logging.openshift.io and ClusterLogForwarder.logging.openshift.io resources.

Given the numerous combinations of log collection, forwarding, and storage configurations, the cluster-logging-operator does not provide an automated upgrade process. This documentation assists administrators in converting existing ClusterLogging.logging.openshift.io and ClusterLogForwarder.logging.openshift.io specifications to the new API. Examples of migrated ClusterLogForwarder.observability.openshift.io resources for common use cases are included.

Using the oc explain command

The oc explain command is an essential tool in the OpenShift CLI (oc) that provides detailed descriptions of the fields within Custom Resources (CRs). This command is invaluable for administrators and developers who are configuring or troubleshooting resources in an OpenShift cluster.

Resource Descriptions

oc explain offers in-depth explanations of all fields associated with a specific object. This includes standard resources like pods and services, as well as more complex entities like statefulsets and custom resources defined by Operators.

To view the documentation for the outputs field of the ClusterLogForwarder custom resource, you can use:

$ oc explain clusterlogforwarders.observability.openshift.io.spec.outputs

The short form obsclf can be used in place of clusterlogforwarders.observability.openshift.io.

This will display detailed information about these fields, including their types, default values, and any associated sub-fields.
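
Using the short form, the same field documentation can be displayed with:

$ oc explain obsclf.spec.outputs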

Hierarchical Structure

The command displays the structure of resource fields in a hierarchical format, clarifying the relationships between different configuration options.

For instance, here’s how you can drill down into the storage configuration for a LokiStack custom resource:

$ oc explain lokistacks.loki.grafana.com
$ oc explain lokistacks.loki.grafana.com.spec
$ oc explain lokistacks.loki.grafana.com.spec.storage
$ oc explain lokistacks.loki.grafana.com.spec.storage.schemas

Each command reveals a deeper level of the resource specification, making the structure clear.

Type Information

oc explain also indicates the type of each field (such as string, integer, or boolean), allowing you to verify that resource definitions use the correct data types.

For example:

$ oc explain lokistacks.loki.grafana.com.spec.size

This shows the expected type for the size field; in the LokiStack API this is a string naming one of the supported deployment sizes (for example, 1x.small).

Default Values

When applicable, the command shows the default values for fields, providing insights into what values will be used if none are explicitly specified.

Again using lokistacks.loki.grafana.com as an example:

$ oc explain lokistacks.spec.template.distributor.replicas
Example output
GROUP:      loki.grafana.com
KIND:       LokiStack
VERSION:    v1

FIELD: replicas <integer>

DESCRIPTION:
    Replicas defines the number of replica pods of the component.

Log Storage

The only managed log storage solution available in this release is a LokiStack, managed by the loki-operator. This solution, previously the preferred alternative to the managed Elasticsearch offering, is unchanged in its deployment process.

To continue using an existing Red Hat managed Elasticsearch or Kibana deployment provided by the elasticsearch-operator, remove the owner references from the Elasticsearch resource named elasticsearch and the Kibana resource named kibana in the openshift-logging namespace before removing the ClusterLogging resource named instance in the same namespace.

  1. Temporarily set the ClusterLogging resource to the Unmanaged state

    $ oc -n openshift-logging patch clusterlogging/instance -p '{"spec":{"managementState": "Unmanaged"}}' --type=merge
  2. Remove ClusterLogging ownerReferences from the Elasticsearch resource

    The following command ensures that ClusterLogging no longer owns the Elasticsearch resource. Updates to the ClusterLogging resource’s logStore field will no longer affect the Elasticsearch resource.

    $ oc -n openshift-logging patch elasticsearch/elasticsearch -p '{"metadata":{"ownerReferences": []}}' --type=merge
  3. Remove ClusterLogging ownerReferences from the Kibana resource

    The following command ensures that ClusterLogging no longer owns the Kibana resource. Updates to the ClusterLogging resource’s visualization field will no longer affect the Kibana resource.

    $ oc -n openshift-logging patch kibana/kibana -p '{"metadata":{"ownerReferences": []}}' --type=merge
  4. Set the ClusterLogging resource back to the Managed state

$ oc -n openshift-logging patch clusterlogging/instance -p '{"spec":{"managementState": "Managed"}}' --type=merge
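
Before removing the ClusterLogging resource, you can verify that the owner references were removed by inspecting both resources; an empty or absent ownerReferences field confirms the change:

$ oc -n openshift-logging get elasticsearch/elasticsearch -o jsonpath='{.metadata.ownerReferences}'
$ oc -n openshift-logging get kibana/kibana -o jsonpath='{.metadata.ownerReferences}'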

Log Visualization

The OpenShift console UI plugin for log visualization has moved from the cluster-logging-operator to the cluster-observability-operator.
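
With the cluster-observability-operator installed, the console plugin can be enabled by creating a UIPlugin resource. The following is a minimal sketch; the LokiStack name logging-loki is an assumption and should match your deployment:

apiVersion: observability.openshift.io/v1alpha1
kind: UIPlugin
metadata:
  name: logging
spec:
  type: Logging
  logging:
    lokiStack:
      name: logging-loki # assumption: replace with the name of your LokiStack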

Log Collection and Forwarding

Log collection and forwarding configurations are now specified under the new API, part of the observability.openshift.io API group. The following sections highlight the differences from the old API resources.

Vector is the only supported collector implementation.

Management, Resource Allocation, and Workload Scheduling

Configuration for management state (e.g., Managed, Unmanaged), resource requests and limits, tolerations, and node selection is now part of the new ClusterLogForwarder API.

Previous Configuration
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
spec:
  managementState: "Managed"
  collection:
    resources:
      limits: {}
      requests: {}
    nodeSelector: {}
    tolerations: []
Current Configuration
apiVersion: "observability.openshift.io/v1"
kind: ClusterLogForwarder
spec:
  managementState: Managed
  collector:
    resources:
      limits: {}
      requests: {}
    nodeSelector: {}
    tolerations: []
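
For reference, a populated version of the new configuration might look like the following sketch. The resource values, node selector, and toleration are illustrative assumptions, not recommendations:

apiVersion: "observability.openshift.io/v1"
kind: ClusterLogForwarder
spec:
  managementState: Managed
  collector:
    resources:
      requests:
        cpu: 500m # illustrative value
        memory: 1Gi # illustrative value
      limits:
        memory: 2Gi # illustrative value
    nodeSelector:
      node-role.kubernetes.io/worker: ""
    tolerations:
    - key: logging # illustrative taint key
      operator: Exists
      effect: NoSchedule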

Input Specifications

The input specification is an optional part of the ClusterLogForwarder specification. Administrators can continue to use the predefined values of application, infrastructure, and audit to collect these sources.
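
Because these values are predefined, a pipeline can reference them directly through inputRefs without declaring an inputs entry. A minimal sketch, in which the output name is a placeholder:

apiVersion: "observability.openshift.io/v1"
kind: ClusterLogForwarder
spec:
  pipelines:
  - name: all-logs
    inputRefs:
    - application
    - infrastructure
    - audit
    outputRefs:
    - my-output # placeholder; must match an output defined in spec.outputs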

Application Inputs

Namespace and container inclusions and exclusions have been consolidated into a single field.

5.9 Application Input with Namespace and Container Includes and Excludes
apiVersion: "logging.openshift.io/v1"
kind: ClusterLogForwarder
spec:
  inputs:
   - name: application-logs
     type: application
     application:
       namespaces:
       - foo
       - bar
       includes:
       - namespace: my-important
         container: main
       excludes:
       - container: too-verbose
6.0 Application Input with Namespace and Container Includes and Excludes
apiVersion: "observability.openshift.io/v1"
kind: ClusterLogForwarder
spec:
  inputs:
   - name: application-logs
     type: application
     application:
       includes:
       - namespace: foo
       - namespace: bar
       - namespace: my-important
         container: main
       excludes:
       - container: too-verbose

application, infrastructure, and audit are reserved words and cannot be used as names when defining an input.

Input Receivers

Changes to input receivers include:

  • Explicit configuration of the type at the receiver level.

  • Port settings moved to the receiver level.

5.9 Input Receivers
apiVersion: "logging.openshift.io/v1"
kind: ClusterLogForwarder
spec:
  inputs:
  - name: an-http
    receiver:
      http:
        port: 8443
        format: kubeAPIAudit
  - name: a-syslog
    receiver:
      type: syslog
      syslog:
        port: 9442
6.0 Input Receivers
apiVersion: "observability.openshift.io/v1"
kind: ClusterLogForwarder
spec:
  inputs:
  - name: an-http
    type: receiver
    receiver:
      type: http
      port: 8443
      http:
        format: kubeAPIAudit
  - name: a-syslog
    type: receiver
    receiver:
      type: syslog
      port: 9442

Output Specifications

High-level changes to output specifications include:

  • URL settings moved to each output type specification.

  • Tuning parameters moved to each output type specification.

  • Separation of TLS configuration from authentication.

  • Explicit configuration of keys and secret/configmap for TLS and authentication.

Secrets and TLS Configuration

Secrets and TLS configurations are now separated into authentication and TLS configuration for each output. They must be explicitly defined in the specification rather than relying on administrators to define secrets with recognized keys. To continue using existing secrets after upgrading, administrators must know which keys previous releases recognized so they can map them explicitly in the new configuration. The examples in the following sections show how to configure ClusterLogForwarder secrets to forward to existing Red Hat managed log storage solutions.
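
Before migrating, you can list the keys present in an existing secret to determine how to map them into the explicit configuration. For example, assuming a secret named collector as in the examples below:

$ oc -n openshift-logging describe secret collector

The Data section of the output lists each key name, such as tls.crt or ca-bundle.crt, along with its size.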

Red Hat Managed Elasticsearch

v5.9 Forwarding to Red Hat Managed Elasticsearch
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  logStore:
    type: elasticsearch
v6.0 Forwarding to Red Hat Managed Elasticsearch
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: default-elasticsearch
    type: elasticsearch
    elasticsearch:
      url: https://elasticsearch:9200
      version: 6
      index: <log_type>-write-{+yyyy.MM.dd}
    tls:
      ca:
        key: ca-bundle.crt
        secretName: collector
      certificate:
        key: tls.crt
        secretName: collector
      key:
        key: tls.key
        secretName: collector
  pipelines:
  - outputRefs:
    - default-elasticsearch
    inputRefs:
    - application
    - infrastructure

In this example, application logs are written to the application-write alias/index instead of app-write.

Red Hat Managed LokiStack

v5.9 Forwarding to Red Hat Managed LokiStack
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  logStore:
    type: lokistack
    lokistack:
      name: lokistack-dev
v6.0 Forwarding to Red Hat Managed LokiStack
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: default-lokistack
    type: lokiStack
    lokiStack:
      target:
        name: lokistack-dev
        namespace: openshift-logging
      authentication:
        token:
          from: serviceAccount
    tls:
      ca:
        key: service-ca.crt
        configMapName: openshift-service-ca.crt
  pipelines:
  - outputRefs:
    - default-lokistack
    inputRefs:
    - application
    - infrastructure
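
When token-based authentication with from: serviceAccount is used, the service account that runs the collector must be bound to the cluster roles that permit collecting the requested log types (see the Authorized condition under Validation and Status). A sketch, assuming a service account named collector:

$ oc adm policy add-cluster-role-to-user collect-application-logs -z collector -n openshift-logging
$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector -n openshift-logging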

Filters and Pipeline Configuration

Pipeline configurations now define only the routing of input sources to their output destinations, with any required transformations configured separately as filters. Pipeline attributes from previous releases that transformed records (for example, parse, labels, and detectMultilineErrors) have been converted to filters in this release. Individual filters are defined in the filters specification and referenced by a pipeline.

5.9 Filters
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
spec:
  pipelines:
   - name: application-logs
     parse: json
     labels:
       foo: bar
     detectMultilineErrors: true
6.0 Filter Configuration
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
spec:
  filters:
  - name: detectexception
    type: detectMultilineException
  - name: parse-json
    type: parse
  - name: labels
    type: openShiftLabels
    openShiftLabels:
      foo: bar
  pipelines:
  - name: application-logs
    filterRefs:
    - detectexception
    - labels
    - parse-json
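
Note that filterRefs complements, rather than replaces, the routing fields of a pipeline. A sketch extending the 6.0 example above, in which the output name is a placeholder:

  pipelines:
  - name: application-logs
    inputRefs:
    - application
    filterRefs:
    - detectexception
    - labels
    - parse-json
    outputRefs:
    - my-output # placeholder; must match an output defined in spec.outputs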

Validation and Status

Most validations are enforced when a resource is created or updated, providing immediate feedback. This is a departure from previous releases, where validation occurred post-creation and required inspecting the resource status. Some validation still occurs post-creation for cases where it is not possible to validate at creation or update time.

Instances of ClusterLogForwarder.observability.openshift.io must satisfy the following conditions before the operator deploys the log collector: Authorized, Valid, and Ready. An example of these conditions:

6.0 Status Conditions
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
status:
  conditions:
  - lastTransitionTime: "2024-09-13T03:28:44Z"
    message: 'permitted to collect log types: [application]'
    reason: ClusterRolesExist
    status: "True"
    type: observability.openshift.io/Authorized
  - lastTransitionTime: "2024-09-13T12:16:45Z"
    message: ""
    reason: ValidationSuccess
    status: "True"
    type: observability.openshift.io/Valid
  - lastTransitionTime: "2024-09-13T12:16:45Z"
    message: ""
    reason: ReconciliationComplete
    status: "True"
    type: Ready
  filterConditions:
  - lastTransitionTime: "2024-09-13T13:02:59Z"
    message: filter "detectexception" is valid
    reason: ValidationSuccess
    status: "True"
    type: observability.openshift.io/ValidFilter-detectexception
  - lastTransitionTime: "2024-09-13T13:02:59Z"
    message: filter "parse-json" is valid
    reason: ValidationSuccess
    status: "True"
    type: observability.openshift.io/ValidFilter-parse-json
  inputConditions:
  - lastTransitionTime: "2024-09-13T12:23:03Z"
    message: input "application1" is valid
    reason: ValidationSuccess
    status: "True"
    type: observability.openshift.io/ValidInput-application1
  outputConditions:
  - lastTransitionTime: "2024-09-13T13:02:59Z"
    message: output "default-lokistack-application1" is valid
    reason: ValidationSuccess
    status: "True"
    type: observability.openshift.io/ValidOutput-default-lokistack-application1
  pipelineConditions:
  - lastTransitionTime: "2024-09-13T03:28:44Z"
    message: pipeline "default-before" is valid
    reason: ValidationSuccess
    status: "True"
    type: observability.openshift.io/ValidPipeline-default-before

Conditions that are satisfied and applicable have a "status" value of "True". Conditions with a status other than "True" provide a reason and a message explaining the issue.
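
You can inspect these conditions, or block until the forwarder reports Ready, with standard oc commands:

$ oc -n openshift-logging get clusterlogforwarders.observability.openshift.io instance -o yaml
$ oc -n openshift-logging wait clusterlogforwarders.observability.openshift.io/instance --for=condition=Ready --timeout=120s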