OpenShift Container Platform uses Fluentd to collect operations and application logs from your cluster and enriches the data with Kubernetes pod and project metadata.
You can configure the CPU and memory limits for the log collector and move the log collector pods to specific nodes. All supported modifications to the log collector can be performed through the spec.collection.log.fluentd stanza in the ClusterLogging custom resource (CR).
The supported way of configuring cluster logging is by using the options described in this documentation. Do not use other configurations, as they are unsupported. Configuration paradigms might change across OpenShift Container Platform releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in this documentation, your changes will disappear because the Elasticsearch Operator and Cluster Logging Operator reconcile any differences. The Operators reverse everything to the defined state by default and by design.
If you must perform configurations not described in the OpenShift Container Platform documentation, you must set your Cluster Logging Operator or Elasticsearch Operator to Unmanaged. An unmanaged cluster logging environment is not supported and does not receive updates until you return cluster logging to Managed.
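If you do need to move to Unmanaged, one way to do it is by setting the managementState field in the ClusterLogging CR. The following is a minimal sketch, assuming the CR is named instance in the openshift-logging project as in the examples below; the second command returns cluster logging to Managed so that reconciliation and updates resume:
$ oc patch ClusterLogging instance -n openshift-logging --type merge -p '{"spec":{"managementState":"Unmanaged"}}'
$ oc patch ClusterLogging instance -n openshift-logging --type merge -p '{"spec":{"managementState":"Managed"}}'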
You can use the oc get pods --all-namespaces -o wide command to see the nodes where the Fluentd pods are deployed.
Run the following command in the openshift-logging project:
$ oc get pods --selector component=fluentd -o wide -n openshift-logging
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
fluentd-8d69v 1/1 Running 0 134m 10.130.2.30 master1.example.com <none> <none>
fluentd-bd225 1/1 Running 0 134m 10.131.1.11 master2.example.com <none> <none>
fluentd-cvrzs 1/1 Running 0 134m 10.130.0.21 master3.example.com <none> <none>
fluentd-gpqg2 1/1 Running 0 134m 10.128.2.27 worker1.example.com <none> <none>
fluentd-l9j7j 1/1 Running 0 134m 10.129.2.31 worker2.example.com <none> <none>
You can adjust both the CPU and memory limits for the log collector.
Edit the ClusterLogging custom resource (CR) in the openshift-logging project:
$ oc edit ClusterLogging instance
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
....
spec:
  collection:
    logs:
      fluentd:
        resources:
          limits: (1)
            memory: 736Mi
          requests:
            cpu: 100m
            memory: 736Mi
(1) Specify the CPU and memory limits and requests as needed. The values shown are the default values.
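As an alternative to opening an editor, the same stanza can be set with a patch. The following is a sketch only, using the default values from the example above against the ClusterLogging CR named instance; adjust the amounts for your workload:
$ oc patch ClusterLogging instance -n openshift-logging --type merge -p '{"spec":{"collection":{"logs":{"fluentd":{"resources":{"limits":{"memory":"736Mi"},"requests":{"cpu":"100m","memory":"736Mi"}}}}}}}'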
The following alerts are generated by the logging collector. You can view these alerts in the OpenShift Container Platform web console, on the Alerts page of the Alerting UI.
Alert | Message | Description | Severity |
---|---|---|---|
 | | Fluentd is reporting a higher number of issues than the specified number, default 10. | Critical |
 | | Fluentd is reporting that Prometheus could not scrape a specific Fluentd instance. | Critical |
 | | Fluentd is reporting that it is overwhelmed. | Warning |
 | | Fluentd is reporting queue usage issues. | Critical |
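In addition to the web console, you can inspect the alerting rules that define these alerts from the command line. The following is a sketch; the name of the PrometheusRule object that carries the collector rules can vary by release, so list the rules first and then display the one you are interested in:
$ oc get prometheusrules -n openshift-logging
$ oc get prometheusrules <rule_name> -n openshift-logging -o yaml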
As an administrator, in the rare case that you forward logs to a third-party log store and do not use the default Elasticsearch log store, you can remove several unused components from your logging cluster.
In other words, if you do not use the default Elasticsearch log store, you can remove the internal Elasticsearch logStore, Kibana visualization, and log curation components from the ClusterLogging custom resource (CR). Removing these components is optional but saves resources.
Verify that your log forwarder does not send log data to the default internal Elasticsearch cluster. Inspect the ClusterLogForwarder CR YAML file that you used to configure log forwarding. Verify that it does not have an outputRefs element that specifies default. For example:
outputRefs:
- default
Suppose the ClusterLogForwarder CR forwards log data to the internal Elasticsearch cluster. In this situation, you cannot remove the logStore component from the ClusterLogging CR, because the internal Elasticsearch cluster is required to store the log data.
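For contrast, the following is a minimal sketch of a ClusterLogForwarder CR that sends application logs only to an external store. The output name remote-elasticsearch and its URL are placeholder values for illustration; the relevant point is that no outputRefs entry specifies default:
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: remote-elasticsearch
    type: elasticsearch
    url: https://elasticsearch.example.com:9200
  pipelines:
  - name: application-logs
    inputRefs:
    - application
    outputRefs:
    - remote-elasticsearch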
Edit the ClusterLogging custom resource (CR) in the openshift-logging project:
$ oc edit ClusterLogging instance
If they are present, remove the logStore, visualization, and curation stanzas from the ClusterLogging CR.
Preserve the collection stanza of the ClusterLogging CR. The result should look similar to the following example:
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  managementState: "Managed"
  collection:
    logs:
      type: "fluentd"
      fluentd: {}
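If you prefer a non-interactive edit, the stanzas can also be removed with a JSON patch. This is a sketch only: each remove operation fails if the corresponding stanza does not exist in your CR, so include only the paths that are actually present:
$ oc patch ClusterLogging instance -n openshift-logging --type json -p '[{"op":"remove","path":"/spec/logStore"},{"op":"remove","path":"/spec/visualization"},{"op":"remove","path":"/spec/curation"}]'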
Verify that the Fluentd pods are redeployed:
$ oc get pods -n openshift-logging
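To confirm that the Elasticsearch and Kibana workloads were removed along with their stanzas, you can also list the remaining workloads in the project. This is a general check rather than a required step; after the change, only collector-related workloads, such as the fluentd daemon set, should remain:
$ oc get deployments,statefulsets,daemonsets -n openshift-logging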