apiVersion: "logging.openshift.io/v1"
kind: ClusterLogging
metadata:
name: instance
namespace: openshift-logging
spec:
collection:
logs:
type: vector
vector: {}
# ...
The Red Hat OpenShift Logging Operator deploys a collector based on the ClusterLogForwarder resource specification. There are two collector options supported by this Operator: the legacy Fluentd collector and the Vector collector.
Fluentd is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to Fluentd, you can use Vector instead.
The log collector is a daemon set that deploys pods to each OpenShift Container Platform node to collect container and node logs.
By default, the log collector uses the following sources:
- System and infrastructure logs generated by journald log messages from the operating system, the container runtime, and OpenShift Container Platform.
- /var/log/containers/*.log for all container logs.
If you configure the log collector to collect audit logs, it collects them from /var/log/audit/audit.log.
The log collector collects the logs from these sources and forwards them internally or externally depending on your logging configuration.
Vector is a log collector offered as an alternative to Fluentd.
You can configure which logging collector type your cluster uses by modifying the collection spec of the ClusterLogging custom resource (CR):
apiVersion: "logging.openshift.io/v1"
kind: ClusterLogging
metadata:
name: instance
namespace: openshift-logging
spec:
collection:
logs:
type: vector
vector: {}
# ...
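For example, one way to apply this change is to edit the ClusterLogging CR in place. This sketch assumes the default resource name instance in the openshift-logging namespace:
$ oc -n openshift-logging edit ClusterLogging instance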
The container runtimes provide minimal information to identify the source of log messages: project, pod name, and container ID. This information is not sufficient to uniquely identify the source of the logs. If a pod with a given name and project is deleted before the log collector begins processing its logs, information from the API server, such as labels and annotations, might not be available. There might not be a way to distinguish the log messages from a similarly named pod and project or trace the logs to their source. This limitation means that log collection and normalization are considered best effort.
The available container runtimes provide minimal information to identify the source of log messages and do not guarantee unique individual log messages or that these messages can be traced to their source.
Feature | Fluentd | Vector |
---|---|---|
App container logs | ✓ | ✓ |
App-specific routing | ✓ | ✓ |
App-specific routing by namespace | ✓ | ✓ |
Infra container logs | ✓ | ✓ |
Infra journal logs | ✓ | ✓ |
Kube API audit logs | ✓ | ✓ |
OpenShift API audit logs | ✓ | ✓ |
Open Virtual Network (OVN) audit logs | ✓ | ✓ |
Feature | Fluentd | Vector |
---|---|---|
Elasticsearch certificates | ✓ | ✓ |
Elasticsearch username / password | ✓ | ✓ |
Cloudwatch keys | ✓ | ✓ |
Cloudwatch STS | ✓ | ✓ |
Kafka certificates | ✓ | ✓ |
Kafka username / password | ✓ | ✓ |
Kafka SASL | ✓ | ✓ |
Loki bearer token | ✓ | ✓ |
Feature | Fluentd | Vector |
---|---|---|
Viaq data model - app | ✓ | ✓ |
Viaq data model - infra | ✓ | ✓ |
Viaq data model - infra(journal) | ✓ | ✓ |
Viaq data model - Linux audit | ✓ | ✓ |
Viaq data model - kube-apiserver audit | ✓ | ✓ |
Viaq data model - OpenShift API audit | ✓ | ✓ |
Viaq data model - OVN | ✓ | ✓ |
Loglevel Normalization | ✓ | ✓ |
JSON parsing | ✓ | ✓ |
Structured Index | ✓ | ✓ |
Multiline error detection | ✓ | ✓ |
Multicontainer / split indices | ✓ | ✓ |
Flatten labels | ✓ | ✓ |
CLF static labels | ✓ | ✓ |
Feature | Fluentd | Vector |
---|---|---|
Fluentd readlinelimit | ✓ | |
Fluentd buffer | ✓ | |
- chunklimitsize | ✓ | |
- totallimitsize | ✓ | |
- overflowaction | ✓ | |
- flushthreadcount | ✓ | |
- flushmode | ✓ | |
- flushinterval | ✓ | |
- retrywait | ✓ | |
- retrytype | ✓ | |
- retrymaxinterval | ✓ | |
- retrytimeout | ✓ | |
Feature | Fluentd | Vector |
---|---|---|
Metrics | ✓ | ✓ |
Dashboard | ✓ | ✓ |
Alerts | ✓ | ✓ |
Feature | Fluentd | Vector |
---|---|---|
Global proxy support | ✓ | ✓ |
x86 support | ✓ | ✓ |
ARM support | ✓ | ✓ |
IBM Power support | ✓ | ✓ |
IBM Z support | ✓ | ✓ |
IPv6 support | ✓ | ✓ |
Log event buffering | ✓ | |
Disconnected Cluster | ✓ | ✓ |
The following collector outputs are supported:
Feature | Fluentd | Vector |
---|---|---|
Elasticsearch v6-v8 | ✓ | ✓ |
Fluent forward | ✓ | |
Syslog RFC3164 | ✓ | ✓ (Logging 5.7+) |
Syslog RFC5424 | ✓ | ✓ (Logging 5.7+) |
Kafka | ✓ | ✓ |
Cloudwatch | ✓ | ✓ |
Cloudwatch STS | ✓ | ✓ |
Loki | ✓ | ✓ |
HTTP | ✓ | ✓ (Logging 5.7+) |
Google Cloud Logging | ✓ | ✓ |
Splunk | | ✓ (Logging 5.6+) |
Administrators can create ClusterLogForwarder resources that specify which logs are collected, how they are transformed, and where they are forwarded.
ClusterLogForwarder resources can be used to forward container, infrastructure, and audit logs to specific endpoints within or outside of a cluster. Transport Layer Security (TLS) is supported so that log forwarders can be configured to send logs securely.
Administrators can also authorize RBAC permissions that define which service accounts and users can access and forward which types of logs.
There are two log forwarding implementations available: the legacy implementation, and the multi log forwarder feature.
Only the Vector collector is supported for use with the multi log forwarder feature. The Fluentd collector can only be used with legacy implementations.
In legacy implementations, you can only use one log forwarder in your cluster. The ClusterLogForwarder resource in this mode must be named instance, and must be created in the openshift-logging namespace. The ClusterLogForwarder resource also requires a corresponding ClusterLogging resource named instance in the openshift-logging namespace.
The multi log forwarder feature is available in logging 5.8 and later, and provides the following functionality:
Administrators can control which users are allowed to define log collection and which logs they are allowed to collect.
Users who have the required permissions are able to specify additional log collection configurations.
Administrators who are migrating from the deprecated Fluentd collector to the Vector collector can deploy a new log forwarder separately from their existing deployment. The existing and new log forwarders can operate simultaneously while workloads are being migrated.
In multi log forwarder implementations, you are not required to create a corresponding ClusterLogging resource for your ClusterLogForwarder resource. You can create multiple ClusterLogForwarder resources using any name, in any namespace, with the following exceptions:
You cannot create a ClusterLogForwarder resource named instance in the openshift-logging namespace, because this is reserved for a log forwarder that supports the legacy workflow using the Fluentd collector.
You cannot create a ClusterLogForwarder resource named collector in the openshift-logging namespace, because this is reserved for the collector.
To use the multi log forwarder feature, you must create a service account and cluster role bindings for that service account. You can then reference the service account in the ClusterLogForwarder resource to control access permissions.
In order to support multi log forwarding in additional namespaces other than the openshift-logging namespace, you must update the Red Hat OpenShift Logging Operator to watch all namespaces.
In logging 5.8 and later, the Red Hat OpenShift Logging Operator provides collect-audit-logs, collect-application-logs, and collect-infrastructure-logs cluster roles, which enable the collector to collect audit logs, application logs, and infrastructure logs respectively.
You can authorize RBAC permissions for log collection by binding the required cluster roles to a service account.
The Red Hat OpenShift Logging Operator is installed in the openshift-logging namespace.
You have administrator permissions.
Create a service account for the collector. If you want to write logs to storage that requires a token for authentication, you must include a token in the service account.
Bind the appropriate cluster roles to the service account:
$ oc adm policy add-cluster-role-to-user <cluster_role_name> system:serviceaccount:<namespace_name>:<service_account_name>
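For example, the following sketch creates a service account and binds all three cluster roles to it. The service account name collector-sa is a placeholder, not a default; substitute your own name and namespace:
$ oc create sa collector-sa -n openshift-logging
$ oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:collector-sa
$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs system:serviceaccount:openshift-logging:collector-sa
$ oc adm policy add-cluster-role-to-user collect-audit-logs system:serviceaccount:openshift-logging:collector-sa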
To create a log forwarder, you must create a ClusterLogForwarder CR that specifies the log input types that the service account can collect. You can also specify which outputs the logs can be forwarded to. If you are using the multi log forwarder feature, you must also reference the service account in the ClusterLogForwarder CR.
If you are using the multi log forwarder feature on your cluster, you can create ClusterLogForwarder custom resources (CRs) in any namespace, using any name.
If you are using a legacy implementation, the ClusterLogForwarder CR must be named instance, and must be created in the openshift-logging namespace.
You need administrator permissions for the namespace where you create the ClusterLogForwarder CR.
apiVersion: "logging.openshift.io/v1"
kind: ClusterLogForwarder
metadata:
name: <log_forwarder_name> (1)
namespace: <log_forwarder_namespace> (2)
spec:
serviceAccountName: <service_account_name> (3)
pipelines:
- inputRefs:
- <log_type> (4)
outputRefs:
- <output_name> (5)
outputs:
- name: <output_name> (6)
type: <output_type> (5)
url: <log_output_url> (7)
# ...
1 | In legacy implementations, the CR name must be instance. In multi log forwarder implementations, you can use any name. |
2 | In legacy implementations, the CR namespace must be openshift-logging. In multi log forwarder implementations, you can use any namespace. |
3 | The name of your service account. The service account is only required in multi log forwarder implementations. |
4 | The log types that are collected. The value for this field can be audit for audit logs, application for application logs, infrastructure for infrastructure logs, or a named input that has been defined for your application. |
5 | The type of output that you want to forward logs to. The value of this field can be default, loki, kafka, elasticsearch, fluentdForward, syslog, or cloudwatch. |
6 | A name for the output that you want to forward logs to. |
7 | The URL of the output that you want to forward logs to. |
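For example, a minimal multi log forwarder configuration might look like the following sketch. The name my-forwarder, the namespace my-namespace, the service account collector-sa, and the Loki URL are placeholders, not defaults:
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: my-forwarder        # any name is allowed in multi log forwarder mode
  namespace: my-namespace   # any namespace is allowed in multi log forwarder mode
spec:
  serviceAccountName: collector-sa   # service account with the collect-* cluster roles bound
  pipelines:
  - inputRefs:
    - application
    outputRefs:
    - my-loki
  outputs:
  - name: my-loki
    type: loki
    url: https://loki.example.com:3100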
Enables multi-line error detection of container logs.
Enabling this feature could have performance implications and may require additional computing resources or alternate logging solutions.
Log parsers often incorrectly identify separate lines of the same exception as separate exceptions. This leads to extra log entries and an incomplete or inaccurate view of the traced information.
java.lang.NullPointerException: Cannot invoke "String.toString()" because "<param1>" is null
at testjava.Main.handle(Main.java:47)
at testjava.Main.printMe(Main.java:19)
at testjava.Main.main(Main.java:10)
To enable logging to detect multi-line exceptions and reassemble them into a single log entry, ensure that the ClusterLogForwarder custom resource (CR) contains a detectMultilineErrors field with a value of true.
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  pipelines:
  - name: my-app-logs
    inputRefs:
    - application
    outputRefs:
    - default
    detectMultilineErrors: true
When log messages appear as a consecutive sequence forming an exception stack trace, they are combined into a single, unified log record. The first log message’s content is replaced with the concatenated content of all the message fields in the sequence.
Language | Fluentd | Vector |
---|---|---|
Java | ✓ | ✓ |
JS | ✓ | ✓ |
Ruby | ✓ | ✓ |
Python | ✓ | ✓ |
Golang | ✓ | ✓ |
PHP | ✓ | |
Dart | ✓ | ✓ |
When this feature is enabled, the collector configuration includes a new section with type: detect_exceptions.
Example Vector configuration section:
[transforms.detect_exceptions_app-logs]
  type = "detect_exceptions"
  inputs = ["application"]
  languages = ["All"]
  group_by = ["kubernetes.namespace_name","kubernetes.pod_name","kubernetes.container_name"]
  expire_after_ms = 2000
  multiline_flush_interval_ms = 1000
Example Fluentd configuration section:
<label @MULTILINE_APP_LOGS>
  <match kubernetes.**>
    @type detect_exceptions
    remove_tag_prefix 'kubernetes'
    message message
    force_line_breaks true
    multiline_flush_interval .2
  </match>
</label>
By default, the logging sends container and infrastructure logs to the default internal log store defined in the ClusterLogging custom resource. However, it does not send audit logs to the internal store because it does not provide secure storage. If this default configuration meets your needs, you do not need to configure the Cluster Log Forwarder.
To send audit logs to the internal Elasticsearch log store, use the Cluster Log Forwarder as described in Forwarding audit logs to the log store.
To send logs to specific endpoints inside and outside your OpenShift Container Platform cluster, you specify a combination of outputs and pipelines in a ClusterLogForwarder custom resource (CR). You can also use inputs to forward the application logs associated with a specific project to an endpoint. Authentication is provided by a Kubernetes Secret object.
output
The destination for log data that you define, or where you want the logs sent. An output can be one of the following types:
- elasticsearch. An external Elasticsearch instance. The elasticsearch output can use a TLS connection.
- fluentdForward. An external log aggregation solution that supports Fluentd. This option uses the Fluentd forward protocols. The fluentdForward output can use a TCP or TLS connection and supports shared-key authentication by providing a shared_key field in a secret. Shared-key authentication can be used with or without TLS.
- syslog. An external log aggregation solution that supports the syslog RFC3164 or RFC5424 protocols. The syslog output can use a UDP, TCP, or TLS connection.
- cloudwatch. Amazon CloudWatch, a monitoring and log storage service hosted by Amazon Web Services (AWS).
- loki. Loki, a horizontally scalable, highly available, multi-tenant log aggregation system.
- kafka. A Kafka broker. The kafka output can use a TCP or TLS connection.
- default. The internal OpenShift Container Platform Elasticsearch instance. You are not required to configure the default output. If you do configure a default output, you receive an error message because the default output is reserved for the Red Hat OpenShift Logging Operator.
pipeline
Defines simple routing from one log type to one or more outputs, or which logs you want to send. The log types are one of the following:
- application. Container logs generated by user applications running in the cluster, except infrastructure container applications.
- infrastructure. Container logs from pods that run in the openshift*, kube*, or default projects, and journal logs sourced from the node file system.
- audit. Audit logs generated by the node audit system, auditd, the Kubernetes API server, the OpenShift API server, and the OVN network.
You can add labels to outbound log messages by using key:value pairs in the pipeline. For example, you might add a label to messages that are forwarded to other data centers or label the logs by type. Labels that are added to objects are also forwarded with the log message.
input
Forwards the application logs associated with a specific project to a pipeline.
In the pipeline, you define which log types to forward using an inputRef parameter and where to forward the logs to using an outputRef parameter.
secret
A key:value map that contains confidential data such as user credentials.
Note the following:
If a ClusterLogForwarder CR object exists, logs are not forwarded to the default Elasticsearch instance, unless there is a pipeline with the default output.
By default, the logging sends container and infrastructure logs to the default internal Elasticsearch log store defined in the ClusterLogging custom resource. However, it does not send audit logs to the internal store because it does not provide secure storage. If this default configuration meets your needs, do not configure the Log Forwarding API.
If you do not define a pipeline for a log type, the logs of the undefined types are dropped. For example, if you specify a pipeline for the application and audit types, but do not specify a pipeline for the infrastructure type, infrastructure logs are dropped.
You can use multiple types of outputs in the ClusterLogForwarder custom resource (CR) to send logs to servers that support different protocols.
The internal OpenShift Container Platform Elasticsearch instance does not provide secure storage for audit logs. We recommend you ensure that the system to which you forward audit logs is compliant with your organizational and governmental regulations and is properly secured. The logging does not comply with those regulations.
The following example forwards the audit logs to a secure external Elasticsearch instance, the infrastructure logs to an insecure external Elasticsearch instance, the application logs to a Kafka broker, and the application logs from the my-project namespace to the internal Elasticsearch instance.
apiVersion: "logging.openshift.io/v1"
kind: ClusterLogForwarder
metadata:
name: instance (1)
namespace: openshift-logging (2)
spec:
outputs:
- name: elasticsearch-secure (3)
type: "elasticsearch"
url: https://elasticsearch.secure.com:9200
secret:
name: elasticsearch
- name: elasticsearch-insecure (4)
type: "elasticsearch"
url: http://elasticsearch.insecure.com:9200
- name: kafka-app (5)
type: "kafka"
url: tls://kafka.secure.com:9093/app-topic
inputs: (6)
- name: my-app-logs
application:
namespaces:
- my-project
pipelines:
- name: audit-logs (7)
inputRefs:
- audit
outputRefs:
- elasticsearch-secure
- default
labels:
secure: "true" (8)
datacenter: "east"
- name: infrastructure-logs (9)
inputRefs:
- infrastructure
outputRefs:
- elasticsearch-insecure
labels:
datacenter: "west"
- name: my-app (10)
inputRefs:
- my-app-logs
outputRefs:
- default
- inputRefs: (11)
- application
outputRefs:
- kafka-app
labels:
datacenter: "south"
1 | The name of the ClusterLogForwarder CR must be instance. |
2 | The namespace for the ClusterLogForwarder CR must be openshift-logging. |
3 | Configuration for a secure Elasticsearch output using a secret with a secure URL. |
4 | Configuration for an insecure Elasticsearch output. |
5 | Configuration for a Kafka output using client-authenticated TLS communication over a secure URL. |
6 | Configuration for an input to filter application logs from the my-project namespace. |
7 | Configuration for a pipeline to send audit logs to the secure external Elasticsearch instance. |
8 | Optional: String. One or more labels to add to the logs. Quote values like "true" so they are recognized as string values, not as a boolean. |
9 | Configuration for a pipeline to send infrastructure logs to the insecure external Elasticsearch instance. |
10 | Configuration for a pipeline to send logs from the my-project project to the internal Elasticsearch instance. |
11 | Configuration for a pipeline to send logs to the Kafka broker, with no pipeline name. |
If your external logging aggregator becomes unavailable and cannot receive logs, Fluentd continues to collect logs and stores them in a buffer. When the log aggregator becomes available, log forwarding resumes, including the buffered logs. If the buffer fills completely, Fluentd stops collecting logs. OpenShift Container Platform rotates the logs and deletes them. You cannot adjust the buffer size or add a persistent volume claim (PVC) to the Fluentd daemon set or pods.
Common key types are provided here. Some output types support additional specialized keys, documented with the output-specific configuration field. All secret keys are optional. Enable the security features you want by setting the relevant keys. You are responsible for creating and maintaining any additional configurations that external destinations might require, such as keys and secrets, service accounts, port openings, or global proxy configuration. OpenShift Logging will not attempt to verify a mismatch between authorization combinations.
Using a TLS URL ('https://...' or 'tls://...') without a secret enables basic TLS server-side authentication. Additional TLS features are enabled by including a secret and setting the following optional fields:
- tls.crt: (string) File name containing a client certificate. Enables mutual authentication. Requires tls.key.
- tls.key: (string) File name containing the private key to unlock the client certificate. Requires tls.crt.
- passphrase: (string) Passphrase to decode an encoded TLS private key. Requires tls.key.
- ca-bundle.crt: (string) File name of a customer CA for server authentication.
- username: (string) Authentication user name. Requires password.
- password: (string) Authentication password. Requires username.
- sasl.enable: (boolean) Explicitly enable or disable SASL. If missing, SASL is automatically enabled when any of the other sasl. keys are set.
- sasl.mechanisms: (array) List of allowed SASL mechanism names. If missing or empty, the system defaults are used.
- sasl.allow-insecure: (boolean) Allow mechanisms that send clear-text passwords. Defaults to false.
You can create a secret in the directory that contains your certificate and key files by using the following command:
$ oc create secret generic -n openshift-logging <my-secret> \
  --from-file=tls.key=<your_key_file> \
  --from-file=tls.crt=<your_crt_file> \
  --from-file=ca-bundle.crt=<your_bundle_file> \
  --from-literal=username=<your_username> \
  --from-literal=password=<your_password>
Generic or opaque secrets are recommended for best results.
You can forward structured logs from different containers within the same pod to different indices. To use this feature, you must configure the pipeline with multi-container support and annotate the pods. Logs are written to indices with a prefix of app-. It is recommended that Elasticsearch be configured with aliases to accommodate this.
JSON formatting of logs varies by application. Because creating too many indices impacts performance, limit your use of this feature to creating indices for logs that have incompatible JSON formats. Use queries to separate logs from different namespaces, or applications with compatible JSON formats.
Logging for Red Hat OpenShift: 5.5
Create or edit a YAML file that defines the ClusterLogForwarder CR object:
apiVersion: "logging.openshift.io/v1"
kind: ClusterLogForwarder
metadata:
name: instance
namespace: openshift-logging
spec:
outputDefaults:
elasticsearch:
structuredTypeKey: kubernetes.labels.logFormat (1)
structuredTypeName: nologformat
enableStructuredContainerLogs: true (2)
pipelines:
- inputRefs:
- application
name: application-logs
outputRefs:
- default
parse: json
1 | Uses the value of the key-value pair that is formed by the Kubernetes logFormat label. |
2 | Enables multi-container outputs. |
Create or edit a YAML file that defines the Pod CR object:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    containerType.logging.openshift.io/heavy: heavy (1)
    containerType.logging.openshift.io/low: low
spec:
  containers:
  - name: heavy (2)
    image: heavyimage
  - name: low
    image: lowimage
1 | Format: containerType.logging.openshift.io/<container-name>: <index> |
2 | Annotation names must match container names |
This configuration might significantly increase the number of shards on the cluster.
You can optionally forward logs to an external Elasticsearch instance in addition to, or instead of, the internal OpenShift Container Platform Elasticsearch instance. You are responsible for configuring the external log aggregator to receive log data from OpenShift Container Platform.
To configure log forwarding to an external Elasticsearch instance, you must create a ClusterLogForwarder custom resource (CR) with an output to that instance, and a pipeline that uses the output. The external Elasticsearch output can use the HTTP (insecure) or HTTPS (secure HTTP) connection.
To forward logs to both an external and the internal Elasticsearch instance, create outputs and pipelines to the external instance and a pipeline that uses the default output to forward logs to the internal instance. You do not need to create a default output. If you do configure a default output, you receive an error message because the default output is reserved for the Red Hat OpenShift Logging Operator.
If you want to forward logs to only the internal OpenShift Container Platform Elasticsearch instance, you do not need to create a ClusterLogForwarder CR.
You must have a logging server that is configured to receive the logging data using the specified protocol or format.
Create or edit a YAML file that defines the ClusterLogForwarder CR object:
apiVersion: "logging.openshift.io/v1"
kind: ClusterLogForwarder
metadata:
name: instance (1)
namespace: openshift-logging (2)
spec:
outputs:
- name: elasticsearch-insecure (3)
type: "elasticsearch" (4)
url: http://elasticsearch.insecure.com:9200 (5)
- name: elasticsearch-secure
type: "elasticsearch"
url: https://elasticsearch.secure.com:9200 (6)
secret:
name: es-secret (7)
pipelines:
- name: application-logs (8)
inputRefs: (9)
- application
- audit
outputRefs:
- elasticsearch-secure (10)
- default (11)
labels:
myLabel: "myValue" (12)
- name: infrastructure-audit-logs (13)
inputRefs:
- infrastructure
outputRefs:
- elasticsearch-insecure
labels:
logs: "audit-infra"
1 | The name of the ClusterLogForwarder CR must be instance. |
2 | The namespace for the ClusterLogForwarder CR must be openshift-logging. |
3 | Specify a name for the output. |
4 | Specify the elasticsearch type. |
5 | Specify the URL and port of the external Elasticsearch instance as a valid absolute URL. You can use the http (insecure) or https (secure HTTP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address. |
6 | For a secure connection, you can specify an https or http URL that you authenticate by specifying a secret. |
7 | For an https prefix, specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project, and must have keys of: tls.crt, tls.key, and ca-bundle.crt that point to the respective certificates that they represent. Otherwise, for http and https prefixes, you can specify a secret that contains a username and password. For more information, see the following "Example: Setting a secret that contains a username and password." |
8 | Optional: Specify a name for the pipeline. |
9 | Specify which log types to forward by using the pipeline: application, infrastructure, or audit. |
10 | Specify the name of the output to use when forwarding logs with this pipeline. |
11 | Optional: Specify the default output to send the logs to the internal Elasticsearch instance. |
12 | Optional: String. One or more labels to add to the logs. |
13 | Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type. |
Create the CR object:
$ oc create -f <file-name>.yaml
You can use a secret that contains a username and password to authenticate a secure connection to an external Elasticsearch instance.
For example, if you cannot use mutual TLS (mTLS) keys because a third party operates the Elasticsearch instance, you can use HTTP or HTTPS and set a secret that contains the username and password.
Create a Secret YAML file similar to the following example. Use base64-encoded values for the username and password fields. The secret type is opaque by default.
apiVersion: v1
kind: Secret
metadata:
  name: openshift-test-secret
data:
  username: <username>
  password: <password>
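For example, you can generate the base64-encoded values with the base64 utility. The user name and password shown here are placeholders:
$ echo -n "<your_username>" | base64
$ echo -n "<your_password>" | base64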
Create the secret:
$ oc create -f openshift-test-secret.yaml -n openshift-logging
Specify the name of the secret in the ClusterLogForwarder CR:
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: elasticsearch
    type: "elasticsearch"
    url: https://elasticsearch.secure.com:9200
    secret:
      name: openshift-test-secret
Create the CR object:
$ oc create -f <file-name>.yaml
You can use the Fluentd forward protocol to send a copy of your logs to an external log aggregator that is configured to accept the protocol instead of, or in addition to, the default Elasticsearch log store. You are responsible for configuring the external log aggregator to receive the logs from OpenShift Container Platform.
To configure log forwarding using the forward protocol, you must create a ClusterLogForwarder custom resource (CR) with one or more outputs to the Fluentd servers, and pipelines that use those outputs. The Fluentd output can use a TCP (insecure) or TLS (secure TCP) connection.
You must have a logging server that is configured to receive the logging data using the specified protocol or format.
Create or edit a YAML file that defines the ClusterLogForwarder CR object:
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance (1)
  namespace: openshift-logging (2)
spec:
  outputs:
  - name: fluentd-server-secure (3)
    type: fluentdForward (4)
    url: 'tls://fluentdserver.security.example.com:24224' (5)
    secret: (6)
      name: fluentd-secret
  - name: fluentd-server-insecure
    type: fluentdForward
    url: 'tcp://fluentdserver.home.example.com:24224'
  pipelines:
  - name: forward-to-fluentd-secure (7)
    inputRefs: (8)
    - application
    - audit
    outputRefs:
    - fluentd-server-secure (9)
    - default (10)
    labels:
      clusterId: "C1234" (11)
  - name: forward-to-fluentd-insecure (12)
    inputRefs:
    - infrastructure
    outputRefs:
    - fluentd-server-insecure
    labels:
      clusterId: "C1234"
1 | The name of the ClusterLogForwarder CR must be instance. |
2 | The namespace for the ClusterLogForwarder CR must be openshift-logging. |
3 | Specify a name for the output. |
4 | Specify the fluentdForward type. |
5 | Specify the URL and port of the external Fluentd instance as a valid absolute URL. You can use the tcp (insecure) or tls (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address. |
6 | If using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project, and must have keys of: tls.crt, tls.key, and ca-bundle.crt that point to the respective certificates that they represent. |
7 | Optional: Specify a name for the pipeline. |
8 | Specify which log types to forward by using the pipeline: application, infrastructure, or audit. |
9 | Specify the name of the output to use when forwarding logs with this pipeline. |
10 | Optional: Specify the default output to forward logs to the internal Elasticsearch instance. |
11 | Optional: String. One or more labels to add to the logs. |
12 | Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type. |
Create the CR object:
$ oc create -f <file-name>.yaml
For Logstash to ingest log data from fluentd, you must enable nanosecond precision in the Logstash configuration file.
In the Logstash configuration file, set nanosecond_precision to true.
input {
  tcp {
    codec => fluent {
      nanosecond_precision => true
    }
    port => 24114
  }
}
filter { }
output { stdout { codec => rubydebug } }
You can use the syslog RFC3164 or RFC5424 protocol to send a copy of your logs to an external log aggregator that is configured to accept the protocol instead of, or in addition to, the default Elasticsearch log store. You are responsible for configuring the external log aggregator, such as a syslog server, to receive the logs from OpenShift Container Platform.
To configure log forwarding using the syslog protocol, you must create a ClusterLogForwarder custom resource (CR) with one or more outputs to the syslog servers, and pipelines that use those outputs. The syslog output can use a UDP, TCP, or TLS connection.
You must have a logging server that is configured to receive the logging data using the specified protocol or format.
Create or edit a YAML file that defines the ClusterLogForwarder CR object:
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance (1)
  namespace: openshift-logging (2)
spec:
  outputs:
  - name: rsyslog-east (3)
    type: syslog (4)
    syslog: (5)
      facility: local0
      rfc: RFC3164
      payloadKey: message
      severity: informational
    url: 'tls://rsyslogserver.east.example.com:514' (6)
    secret: (7)
      name: syslog-secret
  - name: rsyslog-west
    type: syslog
    syslog:
      appName: myapp
      facility: user
      msgID: mymsg
      procID: myproc
      rfc: RFC5424
      severity: debug
    url: 'tcp://rsyslogserver.west.example.com:514'
  pipelines:
  - name: syslog-east (8)
    inputRefs: (9)
    - audit
    - application
    outputRefs: (10)
    - rsyslog-east
    - default (11)
    labels:
      secure: "true" (12)
      syslog: "east"
  - name: syslog-west (13)
    inputRefs:
    - infrastructure
    outputRefs:
    - rsyslog-west
    - default
    labels:
      syslog: "west"
1 | The name of the ClusterLogForwarder CR must be instance. |
2 | The namespace for the ClusterLogForwarder CR must be openshift-logging. |
3 | Specify a name for the output. |
4 | Specify the syslog type. |
5 | Optional: Specify the syslog parameters, listed below. |
6 | Specify the URL and port of the external syslog instance. You can use the udp (insecure), tcp (insecure) or tls (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address. |
7 | If using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project, and must have keys of: tls.crt, tls.key, and ca-bundle.crt that point to the respective certificates that they represent. |
8 | Optional: Specify a name for the pipeline. |
9 | Specify which log types to forward by using the pipeline: application, infrastructure, or audit. |
10 | Specify the name of the output to use when forwarding logs with this pipeline. |
11 | Optional: Specify the default output to forward logs to the internal Elasticsearch instance. |
12 | Optional: String. One or more labels to add to the logs. Quote values like "true" so they are recognized as string values, not as a boolean. |
13 | Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type. |
Create the CR object:
$ oc create -f <file-name>.yaml
You can add namespace_name, pod_name, and container_name elements to the message field of the record by adding the AddLogSource field to your ClusterLogForwarder custom resource (CR).
spec:
  outputs:
  - name: syslogout
    syslog:
      addLogSource: true
      facility: user
      payloadKey: message
      rfc: RFC3164
      severity: debug
      tag: mytag
    type: syslog
    url: tls://syslog-receiver.openshift-logging.svc:24224
  pipelines:
  - inputRefs:
    - application
    name: test-app
    outputRefs:
    - syslogout
This configuration is compatible with both RFC3164 and RFC5424.
Example syslog message without AddLogSource:
<15>1 2020-11-15T17:06:14+00:00 fluentd-9hkb4 mytag - - - {"msgcontent"=>"Message Contents", "timestamp"=>"2020-11-15 17:06:09", "tag_key"=>"rec_tag", "index"=>56}
Example syslog message with AddLogSource:
<15>1 2020-11-16T10:49:37+00:00 crc-j55b9-master-0 mytag - - - namespace_name=clo-test-6327,pod_name=log-generator-ff9746c49-qxm7l,container_name=log-generator,message={"msgcontent":"My life is my message", "timestamp":"2020-11-16 10:49:36", "tag_key":"rec_tag", "index":76}
You can configure the following for the syslog outputs. For more information, see the syslog RFC3164 or RFC5424 RFC.
facility: The syslog facility. The value can be a decimal integer or a case-insensitive keyword:
- 0 or kern for kernel messages
- 1 or user for user-level messages, the default
- 2 or mail for the mail system
- 3 or daemon for system daemons
- 4 or auth for security/authentication messages
- 5 or syslog for messages generated internally by syslogd
- 6 or lpr for the line printer subsystem
- 7 or news for the network news subsystem
- 8 or uucp for the UUCP subsystem
- 9 or cron for the clock daemon
- 10 or authpriv for security authentication messages
- 11 or ftp for the FTP daemon
- 12 or ntp for the NTP subsystem
- 13 or security for the syslog audit log
- 14 or console for the syslog alert log
- 15 or solaris-cron for the scheduling daemon
- 16–23 or local0–local7 for locally used facilities
Optional: payloadKey: The record field to use as payload for the syslog message.
Configuring the payloadKey parameter prevents other parameters from being forwarded to the syslog.
rfc: The RFC to be used for sending logs using syslog. The default is RFC5424.
severity: The syslog severity to set on outgoing syslog records. The value can be a decimal integer or a case-insensitive keyword:
- 0 or Emergency for messages indicating the system is unusable
- 1 or Alert for messages indicating action must be taken immediately
- 2 or Critical for messages indicating critical conditions
- 3 or Error for messages indicating error conditions
- 4 or Warning for messages indicating warning conditions
- 5 or Notice for messages indicating normal but significant conditions
- 6 or Informational for messages indicating informational messages
- 7 or Debug for messages indicating debug-level messages, the default
tag: Tag specifies a record field to use as a tag on the syslog message.
trimPrefix: Remove the specified prefix from the tag.
The following parameters apply to RFC5424:
appName: The APP-NAME is a free-text string that identifies the application that sent the log. Must be specified for RFC5424.
msgID: The MSGID is a free-text string that identifies the type of message. Must be specified for RFC5424.
procID: The PROCID is a free-text string. A change in the value indicates a discontinuity in syslog reporting. Must be specified for RFC5424.
You can forward logs to an external Kafka broker in addition to, or instead of, the default log store.
To configure log forwarding to an external Kafka instance, you must create a ClusterLogForwarder custom resource (CR) with an output to that instance, and a pipeline that uses the output. You can include a specific Kafka topic in the output or use the default. The Kafka output can use a TCP (insecure) or TLS (secure TCP) connection.
Create or edit a YAML file that defines the ClusterLogForwarder CR object:
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance (1)
  namespace: openshift-logging (2)
spec:
  outputs:
  - name: app-logs (3)
    type: kafka (4)
    url: tls://kafka.example.devlab.com:9093/app-topic (5)
    secret:
      name: kafka-secret (6)
  - name: infra-logs
    type: kafka
    url: tcp://kafka.devlab2.example.com:9093/infra-topic (7)
  - name: audit-logs
    type: kafka
    url: tls://kafka.qelab.example.com:9093/audit-topic
    secret:
      name: kafka-secret-qe
  pipelines:
  - name: app-topic (8)
    inputRefs: (9)
    - application
    outputRefs: (10)
    - app-logs
    labels:
      logType: "application" (11)
  - name: infra-topic (12)
    inputRefs:
    - infrastructure
    outputRefs:
    - infra-logs
    labels:
      logType: "infra"
  - name: audit-topic
    inputRefs:
    - audit
    outputRefs:
    - audit-logs
    - default (13)
    labels:
      logType: "audit"
1 | The name of the ClusterLogForwarder CR must be instance. |
2 | The namespace for the ClusterLogForwarder CR must be openshift-logging. |
3 | Specify a name for the output. |
4 | Specify the kafka type. |
5 | Specify the URL and port of the Kafka broker as a valid absolute URL, optionally with a specific topic. You can use the tcp (insecure) or tls (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address. |
6 | If using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project, and must have keys of: tls.crt, tls.key, and ca-bundle.crt that point to the respective certificates that they represent. |
7 | Optional: To send an insecure output, use a tcp prefix in front of the URL. Also omit the secret key and its name from this output. |
8 | Optional: Specify a name for the pipeline. |
9 | Specify which log types to forward by using the pipeline: application, infrastructure, or audit. |
10 | Specify the name of the output to use when forwarding logs with this pipeline. |
11 | Optional: String. One or more labels to add to the logs. |
12 | Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type. |
13 | Optional: Specify default to forward logs to the internal Elasticsearch instance. |
Optional: To forward a single output to multiple Kafka brokers, specify an array of Kafka brokers as shown in the following example:
# ...
spec:
  outputs:
  - name: app-logs
    type: kafka
    secret:
      name: kafka-secret-dev
    kafka: (1)
      brokers: (2)
      - tls://kafka-broker1.example.com:9093/
      - tls://kafka-broker2.example.com:9093/
      topic: app-topic (3)
# ...
1 | Specify a kafka key that has a brokers and topic key. |
2 | Use the brokers key to specify an array of one or more brokers. |
3 | Use the topic key to specify the target topic that receives the logs. |
Apply the ClusterLogForwarder CR by running the following command:
$ oc apply -f <filename>.yaml
You can forward logs to Amazon CloudWatch, a monitoring and log storage service hosted by Amazon Web Services (AWS). You can forward logs to CloudWatch in addition to, or instead of, the default log store.
To configure log forwarding to CloudWatch, you must create a ClusterLogForwarder custom resource (CR) with an output for CloudWatch, and a pipeline that uses the output.
Create a Secret YAML file that uses the aws_access_key_id and aws_secret_access_key fields to specify your base64-encoded AWS credentials. For example:
apiVersion: v1
kind: Secret
metadata:
  name: cw-secret
  namespace: openshift-logging
data:
  aws_access_key_id: QUtJQUlPU0ZPRE5ON0VYQU1QTEUK
  aws_secret_access_key: d0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQo=
Create the secret. For example:
$ oc apply -f cw-secret.yaml
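Alternatively, as a rough equivalent, you can create the same secret from literal values without writing a YAML file. The oc command base64-encodes the values for you; the key values shown are placeholders:
$ oc create secret generic cw-secret -n openshift-logging \
  --from-literal=aws_access_key_id=<your_access_key_id> \
  --from-literal=aws_secret_access_key=<your_secret_access_key>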
Create or edit a YAML file that defines the ClusterLogForwarder CR object. In the file, specify the name of the secret. For example:
apiVersion: "logging.openshift.io/v1"
kind: ClusterLogForwarder
metadata:
name: instance (1)
namespace: openshift-logging (2)
spec:
outputs:
- name: cw (3)
type: cloudwatch (4)
cloudwatch:
groupBy: logType (5)
groupPrefix: <group prefix> (6)
region: us-east-2 (7)
secret:
name: cw-secret (8)
pipelines:
- name: infra-logs (9)
inputRefs: (10)
- infrastructure
- audit
- application
outputRefs:
- cw (11)
1 | The name of the ClusterLogForwarder CR must be instance. |
2 | The namespace for the ClusterLogForwarder CR must be openshift-logging. |
3 | Specify a name for the output. |
4 | Specify the cloudwatch type. |
5 | Optional: Specify how to group the logs. The value can be logType to create log groups for each log type, namespaceName to create a log group for each application namespace, or namespaceUUID to create a log group for each application namespace UUID. |
6 | Optional: Specify a string to replace the default infrastructureName prefix in the names of the log groups. |
7 | Specify the AWS region. |
8 | Specify the name of the secret that contains your AWS credentials. |
9 | Optional: Specify a name for the pipeline. |
10 | Specify which log types to forward by using the pipeline: application, infrastructure, or audit. |
11 | Specify the name of the output to use when forwarding logs with this pipeline. |
Create the CR object:
$ oc create -f <file-name>.yaml
Here, you see an example ClusterLogForwarder custom resource (CR) and the log data that it outputs to Amazon CloudWatch.
Suppose that you are running an OpenShift Container Platform cluster named mycluster. The following command returns the cluster's infrastructureName, which you will use to compose aws commands later on:
$ oc get Infrastructure/cluster -ojson | jq .status.infrastructureName
"mycluster-7977k"
To generate log data for this example, you run a busybox pod in a namespace called app. The busybox pod writes a message to stdout every three seconds:
$ oc run busybox --image=busybox -- sh -c 'while true; do echo "My life is my message"; sleep 3; done'
$ oc logs -f busybox
My life is my message
My life is my message
My life is my message
...
You can look up the UUID of the app namespace where the busybox pod runs:
$ oc get ns/app -ojson | jq .metadata.uid
"794e1e1a-b9f5-4958-a190-e76a9b53d7bf"
In your ClusterLogForwarder custom resource (CR), you configure the infrastructure, audit, and application log types as inputs to the all-logs pipeline. You also connect this pipeline to the cw output, which forwards the logs to a CloudWatch instance in the us-east-2 region:
apiVersion: "logging.openshift.io/v1"
kind: ClusterLogForwarder
metadata:
name: instance
namespace: openshift-logging
spec:
outputs:
- name: cw
type: cloudwatch
cloudwatch:
groupBy: logType
region: us-east-2
secret:
name: cw-secret
pipelines:
- name: all-logs
inputRefs:
- infrastructure
- audit
- application
outputRefs:
- cw
Each region in CloudWatch contains three levels of objects:
- log group
- log stream
- log event
With groupBy: logType in the ClusterLogForwarder CR, the three log types in the inputRefs produce three log groups in Amazon CloudWatch:
$ aws --output json logs describe-log-groups | jq .logGroups[].logGroupName
"mycluster-7977k.application"
"mycluster-7977k.audit"
"mycluster-7977k.infrastructure"
Each of the log groups contains log streams:
$ aws --output json logs describe-log-streams --log-group-name mycluster-7977k.application | jq .logStreams[].logStreamName
"kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log"
$ aws --output json logs describe-log-streams --log-group-name mycluster-7977k.audit | jq .logStreams[].logStreamName
"ip-10-0-131-228.us-east-2.compute.internal.k8s-audit.log"
"ip-10-0-131-228.us-east-2.compute.internal.linux-audit.log"
"ip-10-0-131-228.us-east-2.compute.internal.openshift-audit.log"
...
$ aws --output json logs describe-log-streams --log-group-name mycluster-7977k.infrastructure | jq .logStreams[].logStreamName
"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-69f9fd9b58-zqzw5_openshift-oauth-apiserver_oauth-apiserver-453c5c4ee026fe20a6139ba6b1cdd1bed25989c905bf5ac5ca211b7cbb5c3d7b.log"
"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-797774f7c5-lftrx_openshift-apiserver_openshift-apiserver-ce51532df7d4e4d5f21c4f4be05f6575b93196336be0027067fd7d93d70f66a4.log"
"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-797774f7c5-lftrx_openshift-apiserver_openshift-apiserver-check-endpoints-82a9096b5931b5c3b1d6dc4b66113252da4a6472c9fff48623baee761911a9ef.log"
...
Each log stream contains log events. To see a log event from the busybox pod, you specify its log stream from the application log group:
$ aws logs get-log-events --log-group-name mycluster-7977k.application --log-stream-name kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log
{
"events": [
{
"timestamp": 1629422704178,
"message": "{\"docker\":{\"container_id\":\"da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76\"},\"kubernetes\":{\"container_name\":\"busybox\",\"namespace_name\":\"app\",\"pod_name\":\"busybox\",\"container_image\":\"docker.io/library/busybox:latest\",\"container_image_id\":\"docker.io/library/busybox@sha256:0f354ec1728d9ff32edcd7d1b8bbdfc798277ad36120dc3dc683be44524c8b60\",\"pod_id\":\"870be234-90a3-4258-b73f-4f4d6e2777c7\",\"host\":\"ip-10-0-216-3.us-east-2.compute.internal\",\"labels\":{\"run\":\"busybox\"},\"master_url\":\"https://kubernetes.default.svc\",\"namespace_id\":\"794e1e1a-b9f5-4958-a190-e76a9b53d7bf\",\"namespace_labels\":{\"kubernetes_io/metadata_name\":\"app\"}},\"message\":\"My life is my message\",\"level\":\"unknown\",\"hostname\":\"ip-10-0-216-3.us-east-2.compute.internal\",\"pipeline_metadata\":{\"collector\":{\"ipaddr4\":\"10.0.216.3\",\"inputname\":\"fluent-plugin-systemd\",\"name\":\"fluentd\",\"received_at\":\"2021-08-20T01:25:08.085760+00:00\",\"version\":\"1.7.4 1.6.0\"}},\"@timestamp\":\"2021-08-20T01:25:04.178986+00:00\",\"viaq_index_name\":\"app-write\",\"viaq_msg_id\":\"NWRjZmUyMWQtZjgzNC00MjI4LTk3MjMtNTk3NmY3ZjU4NDk1\",\"log_type\":\"application\",\"time\":\"2021-08-20T01:25:04+00:00\"}",
"ingestionTime": 1629422744016
},
...
In the log group names, you can replace the default infrastructureName prefix, mycluster-7977k, with an arbitrary string like demo-group-prefix. To make this change, you update the groupPrefix field in the ClusterLogForwarder CR:
cloudwatch:
  groupBy: logType
  groupPrefix: demo-group-prefix
  region: us-east-2
The value of groupPrefix replaces the default infrastructureName prefix:
$ aws --output json logs describe-log-groups | jq .logGroups[].logGroupName
"demo-group-prefix.application"
"demo-group-prefix.audit"
"demo-group-prefix.infrastructure"
For each application namespace in your cluster, you can create a log group in CloudWatch whose name is based on the name of the application namespace.
If you delete an application namespace object and create a new one that has the same name, CloudWatch continues using the same log group as before.
If you consider successive application namespace objects that have the same name as equivalent to each other, use the approach described in this example. Otherwise, if you need to distinguish the resulting log groups from each other, see the following "Naming log groups for application namespace UUIDs" section instead.
To create application log groups whose names are based on the names of the application namespaces, you set the value of the groupBy field to namespaceName in the ClusterLogForwarder CR:
cloudwatch:
  groupBy: namespaceName
  region: us-east-2
Setting groupBy to namespaceName affects the application log group only. It does not affect the audit and infrastructure log groups.
In Amazon CloudWatch, the namespace name appears at the end of each log group name. Because there is a single application namespace, "app", the following output shows a new mycluster-7977k.app log group instead of mycluster-7977k.application:
$ aws --output json logs describe-log-groups | jq .logGroups[].logGroupName
"mycluster-7977k.app"
"mycluster-7977k.audit"
"mycluster-7977k.infrastructure"
If the cluster in this example had contained multiple application namespaces, the output would show multiple log groups, one for each namespace.
The groupBy field affects the application log group only. It does not affect the audit and infrastructure log groups.
For each application namespace in your cluster, you can create a log group in CloudWatch whose name is based on the UUID of the application namespace.
If you delete an application namespace object and create a new one, CloudWatch creates a new log group.
If you consider successive application namespace objects with the same name as different from each other, use the approach described in this example. Otherwise, see the preceding "Example: Naming log groups for application namespace names" section instead.
To name log groups after application namespace UUIDs, you set the value of the groupBy field to namespaceUUID in the ClusterLogForwarder CR:
cloudwatch:
  groupBy: namespaceUUID
  region: us-east-2
In Amazon CloudWatch, the namespace UUID appears at the end of each log group name. Because there is a single application namespace, "app", the following output shows a new mycluster-7977k.794e1e1a-b9f5-4958-a190-e76a9b53d7bf log group instead of mycluster-7977k.application:
$ aws --output json logs describe-log-groups | jq .logGroups[].logGroupName
"mycluster-7977k.794e1e1a-b9f5-4958-a190-e76a9b53d7bf" // uid of the "app" namespace
"mycluster-7977k.audit"
"mycluster-7977k.infrastructure"
The groupBy field affects the application log group only. It does not affect the audit and infrastructure log groups.
For clusters with AWS Security Token Service (STS) enabled, you can create an AWS service account manually or create a credentials request by using the Cloud Credential Operator (CCO) utility ccoctl.
Logging for Red Hat OpenShift: 5.5 and later
Create a CredentialsRequest custom resource YAML by using the template below:
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: <your_role_name>-credrequest
  namespace: openshift-cloud-credential-operator
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AWSProviderSpec
    statementEntries:
    - action:
      - logs:PutLogEvents
      - logs:CreateLogGroup
      - logs:PutRetentionPolicy
      - logs:CreateLogStream
      - logs:DescribeLogGroups
      - logs:DescribeLogStreams
      effect: Allow
      resource: arn:aws:logs:*:*:*
  secretRef:
    name: <your_role_name>
    namespace: openshift-logging
  serviceAccountNames:
  - logcollector
Use the ccoctl command to create a role for AWS using your CredentialsRequest CR. With the CredentialsRequest object, this ccoctl command creates an IAM role with a trust policy that is tied to the specified OIDC identity provider, and a permissions policy that grants permissions to perform operations on CloudWatch resources. This command also creates a YAML configuration file in /<path_to_ccoctl_output_dir>/manifests/openshift-logging-<your_role_name>-credentials.yaml. This secret file contains the role_arn key/value used during authentication with the AWS IAM identity provider.
$ ccoctl aws create-iam-roles \
--name=<name> \
--region=<aws_region> \
--credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests \
--identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com (1)
1 | <name> is the name used to tag your cloud resources and should match the name used during your STS cluster install |
Apply the secret created:
$ oc apply -f output/manifests/openshift-logging-<your_role_name>-credentials.yaml
Create or edit a ClusterLogForwarder custom resource:
apiVersion: "logging.openshift.io/v1"
kind: ClusterLogForwarder
metadata:
name: instance (1)
namespace: openshift-logging (2)
spec:
outputs:
- name: cw (3)
type: cloudwatch (4)
cloudwatch:
groupBy: logType (5)
groupPrefix: <group prefix> (6)
region: us-east-2 (7)
secret:
name: <your_role_name> (8)
pipelines:
- name: to-cloudwatch (9)
inputRefs: (10)
- infrastructure
- audit
- application
outputRefs:
- cw (11)
1 | The name of the ClusterLogForwarder CR must be instance. |
2 | The namespace for the ClusterLogForwarder CR must be openshift-logging. |
3 | Specify a name for the output. |
4 | Specify the cloudwatch type. |
5 | Optional: Specify how to group the logs. The value can be logType to create log groups for each log type, namespaceName to create a log group for each application namespace, or namespaceUUID to create a log group for each application namespace UUID. |
6 | Optional: Specify a string to replace the default infrastructureName prefix in the names of the log groups. |
7 | Specify the AWS region. |
8 | Specify the name of the secret that contains your AWS credentials. |
9 | Optional: Specify a name for the pipeline. |
10 | Specify which log types to forward by using the pipeline: application, infrastructure, or audit. |
11 | Specify the name of the output to use when forwarding logs with this pipeline. |
If you have an existing role for AWS, you can create a secret for AWS with STS by using the oc create secret --from-literal command.
In the CLI, enter the following to generate a secret for AWS:
$ oc create secret generic cw-sts-secret -n openshift-logging --from-literal=role_arn=arn:aws:iam::123456789012:role/my-role_with-permissions
Example Secret:
apiVersion: v1
kind: Secret
metadata:
  namespace: openshift-logging
  name: my-secret-name
stringData:
  role_arn: arn:aws:iam::123456789012:role/my-role_with-permissions
You can forward logs to an external Loki logging system in addition to, or instead of, the internal default OpenShift Container Platform Elasticsearch instance.
To configure log forwarding to Loki, you must create a ClusterLogForwarder custom resource (CR) with an output to Loki, and a pipeline that uses the output. The output to Loki can use the HTTP (insecure) or HTTPS (secure HTTP) connection.
You must have a Loki logging system running at the URL you specify with the url field in the CR.
Create or edit a YAML file that defines the ClusterLogForwarder CR object:
apiVersion: "logging.openshift.io/v1"
kind: ClusterLogForwarder
metadata:
name: instance (1)
namespace: openshift-logging (2)
spec:
outputs:
- name: loki-insecure (3)
type: "loki" (4)
url: http://loki.insecure.com:3100 (5)
loki:
tenantKey: kubernetes.namespace_name
labelKeys: kubernetes.labels.foo
- name: loki-secure (6)
type: "loki"
url: https://loki.secure.com:3100
secret:
name: loki-secret (7)
loki:
tenantKey: kubernetes.namespace_name (8)
labelKeys: kubernetes.labels.foo (9)
pipelines:
- name: application-logs (10)
inputRefs: (11)
- application
- audit
outputRefs: (12)
- loki-secure
1 | The name of the ClusterLogForwarder CR must be instance . |
2 | The namespace for the ClusterLogForwarder CR must be openshift-logging . |
3 | Specify a name for the output. |
4 | Specify the type as "loki". |
5 | Specify the URL and port of the Loki system as a valid absolute URL. You can use the http (insecure) or https (secure HTTP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address. Loki's default port for HTTP(S) communication is 3100. |
6 | For a secure connection, you can specify an https or http URL that you authenticate by specifying a secret. |
7 | For an https prefix, specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project and must have tls.crt, tls.key, and ca-bundle.crt keys that point to the respective certificates they represent. Otherwise, for http and https prefixes, you can specify a secret that contains a username and password. For more information, see the following "Example: Setting secret that contains a username and password." |
8 | Optional: Specify a meta-data key field to generate values for the TenantID field in Loki. For example, setting tenantKey: kubernetes.namespace_name uses the names of the Kubernetes namespaces as values for tenant IDs in Loki. To see which other log record fields you can specify, see the "Log Record Fields" link in the following "Additional resources" section. |
9 | Optional: Specify a list of meta-data field keys to replace the default Loki labels. Loki label names must match the regular expression [a-zA-Z_:][a-zA-Z0-9_:]* . Illegal characters in meta-data keys are replaced with _ to form the label name. For example, the kubernetes.labels.foo meta-data key becomes Loki label kubernetes_labels_foo . If you do not set labelKeys , the default value is: [log_type, kubernetes.namespace_name, kubernetes.pod_name, kubernetes_host] . Keep the set of labels small because Loki limits the size and number of labels allowed. See Configuring Loki, limits_config. You can still query based on any log record field using query filters. |
10 | Optional: Specify a name for the pipeline. |
11 | Specify which log types to forward by using the pipeline: application, infrastructure, or audit. |
12 | Specify the name of the output to use when forwarding logs with this pipeline. |
Because Loki requires log streams to be correctly ordered by timestamp, labelKeys always includes the kubernetes_host label set, even if you do not specify it. Including this label set ensures that each stream originates from a single host, which prevents timestamps from becoming disordered due to clock differences on different hosts. |
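Callout 7 refers to a secret that contains a username and password. A minimal sketch of creating such a secret with oc follows; the secret name loki-secret and the credential placeholders are values you replace with your own:
$ oc create secret generic loki-secret -n openshift-logging \
  --from-literal=username=<username> \
  --from-literal=password=<password>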
Create the CR object:
$ oc create -f <file-name>.yaml
If the Log Forwarder API forwards a large block of messages that exceeds the rate limit to Loki, Loki generates rate limit (429
) errors.
These errors can occur during normal operation. For example, when adding the logging to a cluster that already has some logs, rate limit errors might occur while the logging tries to ingest all of the existing log entries. In this case, if the rate of addition of new logs is less than the total rate limit, the historical data is eventually ingested, and the rate limit errors are resolved without requiring user intervention.
In cases where the rate limit errors continue to occur, you can fix the issue by modifying the LokiStack
custom resource (CR).
The LokiStack CR is not available on Grafana-hosted Loki. This topic does not apply to Grafana-hosted Loki servers. |
The Log Forwarder API is configured to forward logs to Loki.
Your system sends a block of messages that is larger than 2 MB to Loki. For example:
"values":[["1630410392689800468","{\"kind\":\"Event\",\"apiVersion\":\
.......
......
......
......
\"received_at\":\"2021-08-31T11:46:32.800278+00:00\",\"version\":\"1.7.4 1.6.0\"}},\"@timestamp\":\"2021-08-31T11:46:32.799692+00:00\",\"viaq_index_name\":\"audit-write\",\"viaq_msg_id\":\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\",\"log_type\":\"audit\"}"]]}]}
After you enter oc logs -n openshift-logging -l component=collector
, the collector logs in your cluster show a line containing one of the following error messages:
429 Too Many Requests Ingestion rate limit exceeded
2023-08-25T16:08:49.301780Z WARN sink{component_kind="sink" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true
2023-08-30 14:52:15 +0000 [warn]: [default_loki_infra] failed to flush the buffer. retry_times=2 next_retry_time=2023-08-30 14:52:19 +0000 chunk="604251225bf5378ed1567231a1c03b8b" error_class=Fluent::Plugin::LokiOutput::LogPostError error="429 Too Many Requests Ingestion rate limit exceeded for user infrastructure (limit: 4194304 bytes/sec) while attempting to ingest '4082' lines totaling '7820025' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased\n"
The error is also visible on the receiving end. For example, in the LokiStack ingester pod:
level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err="rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream
Update the ingestionBurstSize
and ingestionRate
fields in the LokiStack
CR:
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
name: logging-loki
namespace: openshift-logging
spec:
limits:
global:
ingestion:
ingestionBurstSize: 16 (1)
ingestionRate: 8 (2)
# ...
1 | The ingestionBurstSize field defines the maximum local rate-limited sample size per distributor replica in MB. This value is a hard limit. Set this value to at least the maximum logs size expected in a single push request. Single requests that are larger than the ingestionBurstSize value are not permitted. |
2 | The ingestionRate field is a soft limit on the maximum amount of ingested samples per second in MB. Rate limit errors occur if the rate of logs exceeds the limit, but the collector retries sending the logs. As long as the total average is lower than the limit, the system recovers and errors are resolved without user intervention. |
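One way to apply the updated limits without editing a file is oc patch. This sketch assumes the LokiStack is named logging-loki in the openshift-logging namespace, as in the example above:
$ oc patch lokistack logging-loki -n openshift-logging --type=merge \
  -p '{"spec":{"limits":{"global":{"ingestion":{"ingestionBurstSize":16,"ingestionRate":8}}}}}'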
You can forward logs to Google Cloud Logging in addition to, or instead of, the internal default OpenShift Container Platform log store.
Using this feature with Fluentd is not supported. |
Red Hat OpenShift Logging Operator 5.5.1 and later
Create a secret using your Google service account key.
$ oc -n openshift-logging create secret generic gcp-secret --from-file google-application-credentials.json=<your_service_account_key_file.json>
Create a ClusterLogForwarder
Custom Resource YAML using the template below:
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogForwarder"
metadata:
name: "instance"
namespace: "openshift-logging"
spec:
outputs:
- name: gcp-1
type: googleCloudLogging
secret:
name: gcp-secret
googleCloudLogging:
projectId: "openshift-gce-devel" (1)
logId: "app-gcp" (2)
pipelines:
- name: test-app
inputRefs: (3)
- application
outputRefs:
- gcp-1
1 | Set either a projectId, folderId, organizationId, or billingAccountId field and its corresponding value, depending on where you want to store your logs in the GCP resource hierarchy. |
2 | Set the value to add to the logName field of the Log Entry. |
3 | Specify which log types to forward by using the pipeline: application, infrastructure, or audit. |
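As with the other forwarding procedures in this document, create the CR object after you save the manifest. The file name placeholder is whatever you saved the manifest as:
$ oc create -f <file-name>.yaml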
You can forward logs to the Splunk HTTP Event Collector (HEC) in addition to, or instead of, the internal default OpenShift Container Platform log store.
Using this feature with Fluentd is not supported. |
Red Hat OpenShift Logging Operator 5.6 or later
A ClusterLogging
instance with vector
specified as the collector
Base64 encoded Splunk HEC token
Create a secret using your Base64 encoded Splunk HEC token.
$ oc -n openshift-logging create secret generic vector-splunk-secret --from-literal hecToken=<HEC_Token>
Create or edit the ClusterLogForwarder
Custom Resource (CR) using the template below:
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogForwarder"
metadata:
name: "instance" (1)
namespace: "openshift-logging" (2)
spec:
outputs:
- name: splunk-receiver (3)
secret:
name: vector-splunk-secret (4)
type: splunk (5)
url: <http://your.splunk.hec.url:8088> (6)
pipelines: (7)
- inputRefs:
- application
- infrastructure
name: (8)
outputRefs:
- splunk-receiver (9)
1 | The name of the ClusterLogForwarder CR must be instance . |
2 | The namespace for the ClusterLogForwarder CR must be openshift-logging . |
3 | Specify a name for the output. |
4 | Specify the name of the secret that contains your HEC token. |
5 | Specify the output type as splunk . |
6 | Specify the URL (including port) of your Splunk HEC. |
7 | Specify which log types to forward by using the pipeline: application, infrastructure, or audit. |
8 | Optional: Specify a name for the pipeline. |
9 | Specify the name of the output to use when forwarding logs with this pipeline. |
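After saving the manifest, create the CR object and, optionally, confirm that the collector pods are running with the new configuration. These commands follow the same pattern used elsewhere in this document:
$ oc create -f <file-name>.yaml
$ oc get pods -n openshift-logging -l component=collector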
Forwarding logs over HTTP is supported for both the Fluentd and Vector log collectors. To enable HTTP forwarding, specify http
as the output type in the ClusterLogForwarder
custom resource (CR).
Create or edit the ClusterLogForwarder
CR using the template below:
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogForwarder"
metadata:
name: "instance"
namespace: "openshift-logging"
spec:
outputs:
- name: httpout-app
type: http
url: (1)
http:
headers: (2)
h1: v1
h2: v2
method: POST
secret:
name: (3)
tls:
insecureSkipVerify: (4)
pipelines:
- name:
inputRefs:
- application
outputRefs:
- (5)
1 | Destination address for logs. |
2 | Additional headers to send with the log record. |
3 | Secret name for destination credentials. |
4 | Values are either true or false . |
5 | This value should be the same as the output name. |
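The template above leaves the URL, secret name, and pipeline name blank. The following sketch shows one way the completed CR might look; the receiver URL, the secret name http-receiver-secret, and the pipeline name http-app-logs are placeholder values, not defaults:
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogForwarder"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  outputs:
  - name: httpout-app
    type: http
    url: https://httpserver.example.com:8443/logs
    http:
      headers:
        h1: v1
        h2: v2
      method: POST
    secret:
      name: http-receiver-secret
    tls:
      insecureSkipVerify: false
  pipelines:
  - name: http-app-logs
    inputRefs:
    - application
    outputRefs:
    - httpout-app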
You can use the Cluster Log Forwarder to send a copy of the application logs from specific projects to an external log aggregator. You can do this in addition to, or instead of, using the default Elasticsearch log store. You must also configure the external log aggregator to receive log data from OpenShift Container Platform.
To configure forwarding application logs from a project, you must create a ClusterLogForwarder
custom resource (CR) with at least one input from a project, optional outputs for other log aggregators, and pipelines that use those inputs and outputs.
You must have a logging server that is configured to receive the logging data using the specified protocol or format.
Create or edit a YAML file that defines the ClusterLogForwarder
CR object:
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
name: instance (1)
namespace: openshift-logging (2)
spec:
outputs:
- name: fluentd-server-secure (3)
type: fluentdForward (4)
url: 'tls://fluentdserver.security.example.com:24224' (5)
secret: (6)
name: fluentd-secret
- name: fluentd-server-insecure
type: fluentdForward
url: 'tcp://fluentdserver.home.example.com:24224'
inputs: (7)
- name: my-app-logs
application:
namespaces:
- my-project
pipelines:
- name: forward-to-fluentd-insecure (8)
inputRefs: (9)
- my-app-logs
outputRefs: (10)
- fluentd-server-insecure
labels:
project: "my-project" (11)
- name: forward-to-fluentd-secure (12)
inputRefs:
- application
- audit
- infrastructure
outputRefs:
- fluentd-server-secure
- default
labels:
clusterId: "C1234"
1 | The name of the ClusterLogForwarder CR must be instance . |
2 | The namespace for the ClusterLogForwarder CR must be openshift-logging . |
3 | Specify a name for the output. |
4 | Specify the output type: elasticsearch, fluentdForward, syslog, or kafka. |
5 | Specify the URL and port of the external log aggregator as a valid absolute URL. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address. |
6 | If using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project and have tls.crt, tls.key, and ca-bundle.crt keys that each point to the certificates they represent. |
7 | Configuration for an input to filter application logs from the specified projects. |
8 | Configuration for a pipeline to use the input to send project application logs to an external Fluentd instance. |
9 | The my-app-logs input. |
10 | The name of the output to use. |
11 | Optional: String. One or more labels to add to the logs. |
12 | Configuration for a pipeline to send logs to other log aggregators. |
Create the CR object:
$ oc create -f <file-name>.yaml
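To check that the forwarder was accepted, you can inspect the status conditions reported on the resource. This is a general verification step, not part of the original procedure:
$ oc get clusterlogforwarder instance -n openshift-logging -o yaml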
As a cluster administrator, you can use Kubernetes pod labels to gather log data from specific pods and forward it to a log collector.
Suppose that you have an application composed of pods running alongside other pods in various namespaces. If those pods have labels that identify the application, you can gather and output their log data to a specific log collector.
To specify the pod labels, you use one or more matchLabels
key-value pairs. If you specify multiple key-value pairs, the pods must match all of them to be selected.
Create or edit a YAML file that defines the ClusterLogForwarder
CR object. In the file, specify the pod labels using simple equality-based selectors under inputs[].name.application.selector.matchLabels
, as shown in the following example.
ClusterLogForwarder CR YAML file
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
name: instance (1)
namespace: openshift-logging (2)
spec:
pipelines:
- inputRefs: [ myAppLogData ] (3)
outputRefs: [ default ] (4)
inputs: (5)
- name: myAppLogData
application:
selector:
matchLabels: (6)
environment: production
app: nginx
namespaces: (7)
- app1
- app2
outputs: (8)
- default
...
1 | The name of the ClusterLogForwarder CR must be instance . |
2 | The namespace for the ClusterLogForwarder CR must be openshift-logging . |
3 | Specify one or more comma-separated values from inputs[].name . |
4 | Specify one or more comma-separated values from outputs[] . |
5 | Define a unique inputs[].name for each application that has a unique set of pod labels. |
6 | Specify the key-value pairs of pod labels whose log data you want to gather. You must specify both a key and value, not just a key. To be selected, the pods must match all the key-value pairs. |
7 | Optional: Specify one or more namespaces. |
8 | Specify one or more outputs to forward your log data to. The optional default output shown here sends log data to the internal Elasticsearch instance. |
Optional: To restrict the gathering of log data to specific namespaces, use inputs[].name.application.namespaces
, as shown in the preceding example.
Optional: You can send log data from additional applications that have different pod labels to the same pipeline.
For each unique combination of pod labels, create an additional inputs[].name section similar to the one shown. A sketch with two such inputs follows the example below.
Update the selectors
to match the pod labels of this application.
Add the new inputs[].name
value to inputRefs
. For example:
- inputRefs: [ myAppLogData, myOtherAppLogData ]
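As a sketch of the preceding steps, an inputs section that gathers logs from a second application might look like the following. The input name myOtherAppLogData and the app: billing label are hypothetical and must match the labels on your own pods:
inputs:
- name: myAppLogData
  application:
    selector:
      matchLabels:
        environment: production
        app: nginx
- name: myOtherAppLogData
  application:
    selector:
      matchLabels:
        environment: production
        app: billing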
Create the CR object:
$ oc create -f <file-name>.yaml
For more information on matchLabels
in Kubernetes, see Resources that support set-based requirements.
When you create a ClusterLogForwarder
custom resource (CR), if the Red Hat OpenShift Logging Operator does not redeploy the Fluentd pods automatically, you can delete the Fluentd pods to force them to redeploy.
You have created a ClusterLogForwarder
custom resource (CR) object.
Delete the Fluentd pods to force them to redeploy.
$ oc delete pod --selector logging-infra=collector
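To verify that new collector pods are running after the deletion, you can list them with the same selector; the -n flag is included here so the command works from any project:
$ oc get pods --selector logging-infra=collector -n openshift-logging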