Logging v6.0 is a significant upgrade from previous releases, achieving several longstanding goals of Cluster Logging:
Introduction of distinct operators to manage logging components (e.g., collectors, storage, visualization).
Removal of support for managed log storage and visualization based on Elastic products (i.e., Elasticsearch, Kibana).
Deprecation of the Fluentd log collector implementation.
Removal of support for the ClusterLogging.logging.openshift.io and ClusterLogForwarder.logging.openshift.io resources.
The cluster-logging-operator does not provide an automated upgrade process.
Given the various configurations for log collection, forwarding, and storage, no automated upgrade is provided by the cluster-logging-operator. This documentation assists administrators in converting existing ClusterLogging.logging.openshift.io and ClusterLogForwarder.logging.openshift.io specifications to the new API. Examples of migrated ClusterLogForwarder.observability.openshift.io resources for common use cases are included.
The oc explain command
The oc explain command is an essential tool in the OpenShift CLI (oc) that provides detailed descriptions of the fields within Custom Resources (CRs). This command is invaluable for administrators and developers who are configuring or troubleshooting resources in an OpenShift cluster.
oc explain offers in-depth explanations of all fields associated with a specific object. This includes standard resources like pods and services, as well as more complex entities like statefulsets and custom resources defined by Operators.
To view the documentation for the outputs field of the ClusterLogForwarder custom resource, you can use:
$ oc explain clusterlogforwarders.observability.openshift.io.spec.outputs
This will display detailed information about these fields, including their types, default values, and any associated sub-fields.
The command displays the structure of resource fields in a hierarchical format, clarifying the relationships between different configuration options.
For instance, here is how you can drill down into the storage configuration for a LokiStack custom resource:
$ oc explain lokistacks.loki.grafana.com
$ oc explain lokistacks.loki.grafana.com.spec
$ oc explain lokistacks.loki.grafana.com.spec.storage
$ oc explain lokistacks.loki.grafana.com.spec.storage.schemas
Each command reveals a deeper level of the resource specification, making the structure clear.
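To print an entire field tree at once instead of drilling down level by level, you can add the --recursive flag:
$ oc explain lokistacks.loki.grafana.com.spec.storage --recursive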
oc explain also indicates the type of each field (such as string, integer, or boolean), allowing you to verify that resource definitions use the correct data types.
For example:
$ oc explain lokistacks.loki.grafana.com.spec.size
This will show that size should be defined using a string value, such as 1x.small.
When applicable, the command shows the default values for fields, providing insights into what values will be used if none are explicitly specified.
Again, using lokistacks.loki.grafana.com as an example:
$ oc explain lokistacks.spec.template.distributor.replicas
GROUP: loki.grafana.com
KIND: LokiStack
VERSION: v1
FIELD: replicas <integer>
DESCRIPTION:
Replicas defines the number of replica pods of the component.
The only managed log storage solution available in this release is a LokiStack, managed by the loki-operator. This solution, previously available as the preferred alternative to the managed Elasticsearch offering, remains unchanged in its deployment process.
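For reference, such a deployment is a LokiStack custom resource similar to the following sketch. The storage secret name, schema date, and storage class shown here are illustrative placeholders, not required values:
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: lokistack-dev
  namespace: openshift-logging
spec:
  size: 1x.small
  storage:
    schemas:
    - version: v13
      effectiveDate: "2024-10-01"
    secret:
      name: logging-loki-s3   # object storage secret (placeholder name)
      type: s3
  storageClassName: gp3-csi    # any available StorageClass
  tenants:
    mode: openshift-logging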
To continue using an existing Red Hat managed Elasticsearch or Kibana deployment provided by the elasticsearch-operator, remove the owner references from the Elasticsearch and Kibana resources, as shown in the following procedure.
Temporarily set ClusterLogging to the Unmanaged state
$ oc -n openshift-logging patch clusterlogging/instance -p '{"spec":{"managementState": "Unmanaged"}}' --type=merge
Remove ClusterLogging ownerReferences from the Elasticsearch resource
The following command ensures that ClusterLogging no longer owns the Elasticsearch resource. Updates to the ClusterLogging resource's logStore field will no longer affect the Elasticsearch resource.
$ oc -n openshift-logging patch elasticsearch/elasticsearch -p '{"metadata":{"ownerReferences": []}}' --type=merge
Remove ClusterLogging ownerReferences from the Kibana resource
The following command ensures that ClusterLogging no longer owns the Kibana resource. Updates to the ClusterLogging resource's visualization field will no longer affect the Kibana resource.
$ oc -n openshift-logging patch kibana/kibana -p '{"metadata":{"ownerReferences": []}}' --type=merge
Set ClusterLogging back to the Managed state
$ oc -n openshift-logging patch clusterlogging/instance -p '{"spec":{"managementState": "Managed"}}' --type=merge
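Optionally, verify that the owner references were removed; empty output from the following commands indicates that ClusterLogging no longer owns the resources:
$ oc -n openshift-logging get elasticsearch/elasticsearch -o jsonpath='{.metadata.ownerReferences}'
$ oc -n openshift-logging get kibana/kibana -o jsonpath='{.metadata.ownerReferences}'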
The OpenShift console UI plugin for log visualization has been moved from the cluster-logging-operator to the cluster-observability-operator.
Log collection and forwarding configurations are now specified under the new API, which is part of the observability.openshift.io API group. The following sections highlight the differences from the old API resources.
Vector is the only supported collector implementation.
Configuration for management state (e.g., Managed, Unmanaged), resource requests and limits, tolerations, and node selection is now part of the new ClusterLogForwarder API.
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
spec:
managementState: "Managed"
collection:
resources:
limits: {}
requests: {}
nodeSelector: {}
tolerations: {}
apiVersion: "observability.openshift.io/v1"
kind: ClusterLogForwarder
spec:
managementState: Managed
collector:
resources:
limits: {}
requests: {}
nodeSelector: {}
tolerations: {}
The input specification is an optional part of the ClusterLogForwarder specification. Administrators can continue to use the predefined values of application, infrastructure, and audit to collect these sources.
Namespace and container inclusions and exclusions have been consolidated into a single field.
apiVersion: "logging.openshift.io/v1"
kind: ClusterLogForwarder
spec:
inputs:
- name: application-logs
type: application
application:
namespaces:
- foo
- bar
includes:
- namespace: my-important
container: main
excludes:
- container: too-verbose
apiVersion: "observability.openshift.io/v1"
kind: ClusterLogForwarder
spec:
inputs:
- name: application-logs
type: application
application:
includes:
- namespace: foo
- namespace: bar
- namespace: my-important
container: main
excludes:
- container: too-verbose
application, infrastructure, and audit are reserved words and cannot be used as names when defining an input.
Changes to input receivers include:
Explicit configuration of the type at the receiver level.
Port settings moved to the receiver level.
apiVersion: "logging.openshift.io/v1"
kind: ClusterLogForwarder
spec:
inputs:
- name: an-http
receiver:
http:
port: 8443
format: kubeAPIAudit
- name: a-syslog
receiver:
type: syslog
syslog:
port: 9442
apiVersion: "observability.openshift.io/v1"
kind: ClusterLogForwarder
spec:
inputs:
- name: an-http
type: receiver
receiver:
type: http
port: 8443
http:
format: kubeAPIAudit
- name: a-syslog
type: receiver
receiver:
type: syslog
port: 9442
High-level changes to output specifications include:
URL settings moved to each output type specification.
Tuning parameters moved to each output type specification.
Separation of TLS configuration from authentication.
Explicit configuration of keys and secret/configmap for TLS and authentication.
Secret and TLS settings are now split into separate authentication and TLS configurations for each output, and both must be defined explicitly in the specification rather than relying on secrets with implicitly recognized keys. To keep using existing secrets when upgrading, administrators must know which keys those secrets contain so they can reference them in the new TLS and authentication fields. The examples in the following sections show how to configure ClusterLogForwarder secrets to forward to existing Red Hat managed log storage solutions.
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  logStore:
    type: elasticsearch
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: default-elasticsearch
    type: elasticsearch
    elasticsearch:
      url: https://elasticsearch:9200
      version: 6
      index: <log_type>-write-{+yyyy.MM.dd}
    tls:
      ca:
        key: ca-bundle.crt
        secretName: collector
      certificate:
        key: tls.crt
        secretName: collector
      key:
        key: tls.key
        secretName: collector
  pipelines:
  - outputRefs:
    - default-elasticsearch
    inputRefs:
    - application
    - infrastructure
In this example, application logs are written to indices prefixed with application-write instead of the app-write prefix used by the previous managed Elasticsearch configuration.
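The tls section in this example expects a secret named collector in the openshift-logging namespace that contains the keys ca-bundle.crt, tls.crt, and tls.key. If such a secret does not already exist, it can be created with a command like the following; the local file names are placeholders for your own certificate files:
$ oc -n openshift-logging create secret generic collector \
    --from-file=ca-bundle.crt=ca.crt \
    --from-file=tls.crt=client.crt \
    --from-file=tls.key=client.key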
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  logStore:
    type: lokistack
    lokistack:
      name: lokistack-dev
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: default-lokistack
    type: lokiStack
    lokiStack:
      target:
        name: lokistack-dev
        namespace: openshift-logging
      authentication:
        token:
          from: serviceAccount
    tls:
      ca:
        key: service-ca.crt
        configMapName: openshift-service-ca.crt
  pipelines:
  - outputRefs:
    - default-lokistack
    inputRefs:
    - application
    - infrastructure
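Because the lokiStack output in this example authenticates with a service account token, the service account used by the ClusterLogForwarder must be bound to the cluster roles that grant log collection permissions, such as collect-application-logs and collect-infrastructure-logs. A hedged example, assuming the forwarder uses a service account named logcollector in the openshift-logging namespace:
$ oc adm policy add-cluster-role-to-user collect-application-logs -z logcollector -n openshift-logging
$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z logcollector -n openshift-logging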
Pipeline configurations now define only the routing of input sources to their output destinations, with any required transformations configured separately as filters. All attributes of pipelines from previous releases have been converted to filters in this release. Individual filters are defined in the filters specification and referenced by a pipeline.
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
spec:
  pipelines:
  - name: application-logs
    parse: json
    labels:
      foo: bar
    detectMultilineErrors: true
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
spec:
  filters:
  - name: detectexception
    type: detectMultilineException
  - name: parse-json
    type: parse
  - name: labels
    type: openshiftLabels
    openshiftLabels:
      foo: bar
  pipelines:
  - name: application-logs
    filterRefs:
    - detectexception
    - labels
    - parse-json
Most validations are enforced when a resource is created or updated, providing immediate feedback. This is a departure from previous releases, where validation occurred post-creation and required inspecting the resource status. Some validation still occurs post-creation for cases where it is not possible to validate at creation or update time.
Instances of the ClusterLogForwarder.observability.openshift.io resource must satisfy the following conditions before the operator will deploy the log collector: Authorized, Valid, Ready. An example of these conditions is:
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
status:
  conditions:
  - lastTransitionTime: "2024-09-13T03:28:44Z"
    message: 'permitted to collect log types: [application]'
    reason: ClusterRolesExist
    status: "True"
    type: observability.openshift.io/Authorized
  - lastTransitionTime: "2024-09-13T12:16:45Z"
    message: ""
    reason: ValidationSuccess
    status: "True"
    type: observability.openshift.io/Valid
  - lastTransitionTime: "2024-09-13T12:16:45Z"
    message: ""
    reason: ReconciliationComplete
    status: "True"
    type: Ready
  filterConditions:
  - lastTransitionTime: "2024-09-13T13:02:59Z"
    message: filter "detectexception" is valid
    reason: ValidationSuccess
    status: "True"
    type: observability.openshift.io/ValidFilter-detectexception
  - lastTransitionTime: "2024-09-13T13:02:59Z"
    message: filter "parse-json" is valid
    reason: ValidationSuccess
    status: "True"
    type: observability.openshift.io/ValidFilter-parse-json
  inputConditions:
  - lastTransitionTime: "2024-09-13T12:23:03Z"
    message: input "application1" is valid
    reason: ValidationSuccess
    status: "True"
    type: observability.openshift.io/ValidInput-application1
  outputConditions:
  - lastTransitionTime: "2024-09-13T13:02:59Z"
    message: output "default-lokistack-application1" is valid
    reason: ValidationSuccess
    status: "True"
    type: observability.openshift.io/ValidOutput-default-lokistack-application1
  pipelineConditions:
  - lastTransitionTime: "2024-09-13T03:28:44Z"
    message: pipeline "default-before" is valid
    reason: ValidationSuccess
    status: "True"
    type: observability.openshift.io/ValidPipeline-default-before
Conditions that are satisfied and applicable have a "status" value of "True". Conditions with a status other than "True" provide a reason and a message explaining the issue.
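To review these conditions from the command line, you can inspect the resource status or wait for the Ready condition. For example, for a ClusterLogForwarder named instance in the openshift-logging namespace, matching the earlier examples:
$ oc -n openshift-logging get clusterlogforwarders.observability.openshift.io instance -o yaml
$ oc -n openshift-logging wait clusterlogforwarders.observability.openshift.io/instance --for=condition=Ready --timeout=2m
Adjust the name and namespace to match your own deployment.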