OpenShift Container Platform uses Kibana to display the log data collected by Fluentd and indexed by Elasticsearch.
You can scale Kibana for redundancy and configure the CPU and memory for your Kibana nodes.
You must set cluster logging to the Unmanaged state before performing these configurations, unless otherwise noted. For more information, see Changing the cluster logging management state. Operators in an unmanaged state are unsupported, and the cluster administrator assumes full control of the individual component configurations and upgrades. For more information, see Support policy for unmanaged Operators.
Each component specification allows for adjustments to both the CPU and memory limits.
Edit the ClusterLogging custom resource (CR) in the openshift-logging project:
$ oc edit ClusterLogging instance
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
....
spec:
  visualization:
    type: "kibana"
    kibana:
      replicas:
      resources: (1)
        limits:
          memory: 1Gi
        requests:
          cpu: 500m
          memory: 1Gi
      proxy: (2)
        resources:
          limits:
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 100Mi
1 Specify the CPU and memory limits to allocate for each node.
2 Specify the CPU and memory limits to allocate to the Kibana proxy.
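If you prefer a non-interactive edit, the same change can be applied with a single oc patch command. This is a sketch rather than part of the documented procedure; adjust the resource values to your environment:

$ oc -n openshift-logging patch ClusterLogging instance --type merge \
  -p '{"spec":{"visualization":{"kibana":{"resources":{"limits":{"memory":"1Gi"},"requests":{"cpu":"500m","memory":"1Gi"}}}}}}'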
You can scale the Kibana deployment for redundancy.
Edit the ClusterLogging custom resource (CR) in the openshift-logging project:
$ oc edit ClusterLogging instance
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
....
spec:
  visualization:
    type: "kibana"
    kibana:
      replicas: 1 (1)
1 Specify the number of Kibana nodes.
You can control which nodes the Kibana pods run on, and prevent other workloads from using those nodes, by using tolerations on the pods.
You apply tolerations to the Kibana pods through the ClusterLogging custom resource (CR) and apply taints to a node through the node specification. A taint on a node is a key:value pair that instructs the node to repel all pods that do not tolerate the taint. Using a specific key:value pair that is not on other pods ensures that only the Kibana pod can run on that node.
Cluster logging and Elasticsearch must be installed.
Use the following command to add a taint to a node where you want to schedule the Kibana pod:
$ oc adm taint nodes <node-name> <key>=<value>:<effect>
For example:
$ oc adm taint nodes node1 kibana=node:NoExecute
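After the command runs, the taint appears in the node specification. As a sketch of the expected result (field names follow the core Kubernetes Node API):

spec:
  taints:
  - key: kibana
    value: node
    effect: NoExecute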
This example places a taint on node1 that has key kibana, value node, and taint effect NoExecute.
You must use the NoExecute taint effect. NoExecute schedules only pods that match the taint and removes existing pods that do not match.
Edit the visualization section of the ClusterLogging custom resource (CR) to configure a toleration for the Kibana pod:
visualization:
  type: "kibana"
  kibana:
    tolerations:
    - key: "kibana" (1)
      operator: "Exists" (2)
      effect: "NoExecute" (3)
      tolerationSeconds: 6000 (4)
1 Specify the key that you added to the node.
2 Specify the Exists operator to require the key/value/effect parameters to match.
3 Specify the NoExecute effect.
4 Optionally, specify the tolerationSeconds parameter to set how long a pod can remain bound to a node before being evicted.
This toleration matches the taint created by the oc adm taint command. A pod with this toleration can be scheduled onto node1.
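If you later need to allow other workloads back onto the node, you can remove the taint by appending a hyphen to the same taint specification. This follows the standard oc adm taint removal syntax; substitute your own node name and key:

$ oc adm taint nodes node1 kibana=node:NoExecute-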
Kibana’s Visualize tab enables you to create visualizations and dashboards for monitoring container logs, allowing administrator users (cluster-admin or cluster-reader) to view logs by deployment, namespace, pod, and container.
To load dashboards and other Kibana UI objects:
If necessary, get the Kibana route, which is created by default upon installation of the Cluster Logging Operator:
$ oc get routes -n openshift-logging

NAMESPACE           NAME     HOST/PORT                                     PATH   SERVICES   PORT    TERMINATION          WILDCARD
openshift-logging   kibana   kibana-openshift-logging.apps.openshift.com          kibana     <all>   reencrypt/Redirect   None
Get the name of your Elasticsearch pods:

$ oc get pods -l component=elasticsearch

NAME                                            READY   STATUS    RESTARTS   AGE
elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6k    2/2     Running   0          22h
elasticsearch-cdm-5ceex6ts-2-f799564cb-l9mj7    2/2     Running   0          22h
elasticsearch-cdm-5ceex6ts-3-585968dc68-k7kjr   2/2     Running   0          22h
Create the necessary per-user configuration that this procedure requires:
Log in to the Kibana dashboard as the user you want to add the dashboards to.
https://kibana-openshift-logging.apps.openshift.com (1)
1 Where the URL is the Kibana route.
If the Authorize Access page appears, select all permissions and click Allow selected permissions.
Log out of the Kibana dashboard.
Run the following command from the project where the pod is located, using the name of any of your Elasticsearch pods:
$ oc exec <es-pod> -- es_load_kibana_ui_objects <user-name>
For example:
$ oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6k -- es_load_kibana_ui_objects <user-name>
The metadata of the Kibana objects, such as visualizations and dashboards, is stored in Elasticsearch with the .kibana.{user_hash} index format. Any custom dashboard can be imported for a particular user either by using the import/export feature or by inserting the metadata into the Elasticsearch index by using the curl command.
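As one way to see these per-user indices, you can query Elasticsearch from inside a pod. This is a sketch that assumes the es_util helper available in the OpenShift Elasticsearch image; replace <es-pod> with one of your Elasticsearch pod names:

$ oc exec <es-pod> -c elasticsearch -- es_util --query=_cat/indices | grep kibana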