Before installing cluster logging into your cluster, review the following sections.
OpenShift Container Platform cluster logging is designed to be used with the default configuration, which is tuned for small to medium-sized OpenShift Container Platform clusters.
The installation instructions that follow include a sample Cluster Logging Custom Resource (CR), which you can use to create a cluster logging instance and configure your cluster logging deployment.
If you want to use the default cluster logging install, you can use the sample CR directly.
If you want to customize your deployment, make changes to the sample CR as needed. The following sections describe the configurations you can make when installing your cluster logging instance or modify after installation. See the Configuring sections for more information on working with each component, including modifications you can make outside of the Cluster Logging Custom Resource.
You can configure your cluster logging environment by modifying the Cluster Logging Custom Resource deployed in the openshift-logging project.
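For orientation, the following is a minimal sketch of the resource you modify; the apiVersion, name, and namespace match the complete example at the end of this section, and the component configuration is elided:

```yaml
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  managementState: "Managed"
  # logStore, collection, visualization, and curation configuration goes here
```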
You can modify any of the following components during installation or after installation:
The Cluster Logging Operator and Elasticsearch Operator can be in a Managed or Unmanaged state.
In the Managed state, the Cluster Logging Operator (CLO) responds to changes in the Cluster Logging Custom Resource (CR) and attempts to update the cluster to match the CR.
To modify certain components managed by the Cluster Logging Operator or the Elasticsearch Operator, you must set the operator to the Unmanaged state.
In the Unmanaged state, the operators do not respond to changes in the CRs; the administrator assumes full control of individual component configurations and upgrades.
The OpenShift Container Platform documentation indicates in a prerequisite step when you must set the cluster to Unmanaged. An unmanaged deployment does not receive updates until the Cluster Logging Operator is returned to a Managed state.

For example, the management state is set in the Cluster Logging Custom Resource:

```yaml
spec:
  managementState: "Managed"
```
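To place the deployment into the Unmanaged state instead, you change the same field. The following is a minimal sketch showing only the relevant field; the rest of the Custom Resource is unchanged:

```yaml
spec:
  managementState: "Unmanaged"
```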
You can adjust both the CPU and memory limits for each component by modifying the resources block with valid memory and CPU values:
```yaml
spec:
  logStore:
    elasticsearch:
      resources:
        limits:
          cpu:
          memory:
        requests:
          cpu: 1
          memory: 16Gi
    type: "elasticsearch"
  collection:
    logs:
      fluentd:
        resources:
          limits:
            cpu:
            memory:
          requests:
            cpu:
            memory:
      type: "fluentd"
  visualization:
    kibana:
      resources:
        limits:
          cpu:
          memory:
        requests:
          cpu:
          memory:
    type: kibana
  curation:
    curator:
      resources:
        limits:
          memory: 200Mi
        requests:
          cpu: 200m
          memory: 200Mi
    type: "curator"
```
You can configure a persistent storage class and size for the Elasticsearch cluster using the storageClassName and size parameters. The Cluster Logging Operator creates a PersistentVolumeClaim for each data node in the Elasticsearch cluster based on these parameters.
```yaml
spec:
  logStore:
    type: "elasticsearch"
    elasticsearch:
      storage:
        storageClassName: "gp2"
        size: "200G"
```
This example specifies that each data node in the cluster will be bound to a PersistentVolumeClaim that requests "200G" of "gp2" storage. Each primary shard will be backed by a single replica.
Omitting the storage block, or leaving it empty as shown in the following example, results in a deployment that uses only ephemeral storage:

```yaml
spec:
  logStore:
    type: "elasticsearch"
    elasticsearch:
      storage: {}
```
You can set the policy that defines how Elasticsearch shards are replicated across data nodes in the cluster:
* FullRedundancy. The shards for each index are fully replicated to every data node.
* MultipleRedundancy. The shards for each index are spread over half of the data nodes.
* SingleRedundancy. A single copy of each shard. Logs are always available and recoverable as long as at least two data nodes exist.
* ZeroRedundancy. No copies of any shards. Logs may be unavailable (or lost) in the event a node is down or fails.
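For example, the redundancy policy is set on the log store in the Cluster Logging Custom Resource. The following is a minimal sketch; the SingleRedundancy value is illustrative and also appears in the complete example later in this section:

```yaml
spec:
  logStore:
    type: "elasticsearch"
    elasticsearch:
      redundancyPolicy: "SingleRedundancy"
```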
You specify the schedule for Curator in the [cron format](https://en.wikipedia.org/wiki/Cron).
```yaml
spec:
  curation:
    type: "curator"
    curator:
      resources:
      schedule: "30 3 * * *"
```
The following is an example of a Cluster Logging Custom Resource modified using the options previously described.
apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" namespace: "openshift-logging" spec: managementState: "Managed" logStore: type: "elasticsearch" elasticsearch: nodeCount: 2 resources: limits: memory: 2Gi requests: cpu: 200m memory: 2Gi storage: {} redundancyPolicy: "SingleRedundancy" visualization: type: "kibana" kibana: resources: limits: memory: 1Gi requests: cpu: 500m memory: 1Gi replicas: 1 curation: type: "curator" curator: resources: limits: memory: 200Mi requests: cpu: 200m memory: 200Mi schedule: "*/5 * * * *" collection: logs: type: "fluentd" fluentd: resources: limits: memory: 1Gi requests: cpu: 200m memory: 1Gi
Each Elasticsearch deployment requires one persistent data volume per data node. On OpenShift Container Platform this is achieved using Persistent Volume Claims.
The Elasticsearch Operator names the PVCs using the Elasticsearch resource name. Refer to Persistent Elasticsearch Storage for more details.
Fluentd ships logs from the systemd journal and from /var/log/containers/ to Elasticsearch.
Therefore, consider in advance how much data you need to retain, and remember that you are aggregating application log data. Some Elasticsearch users have found that it is necessary to keep absolute storage consumption around 50% and below 70% at all times, which helps to avoid Elasticsearch becoming unresponsive during large merge operations.
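As a rough illustration, assume the "200G" volume from the storage example earlier in this section: keeping consumption at or below 70% means planning for no more than about 140G of indexed data per data node, and targeting 50% leaves roughly 100G in use.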
By default, Elasticsearch stops allocating new data to a node when disk usage reaches 85%, and at 90% it attempts to relocate existing shards from that node to other nodes if possible. If no nodes have free capacity below 85%, Elasticsearch effectively rejects creating new indices and becomes RED.
These low and high watermark values are Elasticsearch defaults in the current release. You can modify these values, but you must also apply any modifications to the alerts, which are based on these defaults.
For more information on installing operators, see Installing Operators from the OperatorHub.