You can use node selectors to deploy the Elasticsearch, Kibana, and Curator pods to different nodes.
You can configure the Cluster Logging Operator to deploy the pods for any or all of the cluster logging components (Elasticsearch, Kibana, and Curator) to different nodes. You cannot move the Cluster Logging Operator pod from its installed location.
For example, you can move the Elasticsearch pods to a separate node because of their high CPU, memory, and disk requirements.
You should set your machine set to use at least 6 replicas.
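For example, you could scale an existing machine set to six replicas with the oc scale command. This is only a sketch; <machineset-name> is a placeholder for a machine set in your cluster:

$ oc scale machineset <machineset-name> --replicas=6 -n openshift-machine-api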
Cluster logging and Elasticsearch must be installed. These features are not installed by default.
Edit the ClusterLogging custom resource (CR) in the openshift-logging project:
$ oc edit ClusterLogging instance
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
....
spec:
  collection:
    logs:
      fluentd:
        resources: null
      type: fluentd
  curation:
    curator:
      nodeSelector: (1)
        node-role.kubernetes.io/infra: ''
      resources: null
      schedule: 30 3 * * *
    type: curator
  logStore:
    elasticsearch:
      nodeCount: 3
      nodeSelector: (1)
        node-role.kubernetes.io/infra: ''
      redundancyPolicy: SingleRedundancy
      resources:
        limits:
          cpu: 500m
          memory: 16Gi
        requests:
          cpu: 500m
          memory: 16Gi
      storage: {}
    type: elasticsearch
  managementState: Managed
  visualization:
    kibana:
      nodeSelector: (1)
        node-role.kubernetes.io/infra: ''
      proxy:
        resources: null
      replicas: 1
      resources: null
    type: kibana
....
1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node.
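For example, you could apply a custom label to a node and then match it with a <key>: <value> pair in the nodeSelector. This is only a sketch; the logging=es-node label and the <node-name> placeholder are hypothetical:

$ oc label node <node-name> logging=es-node

  logStore:
    elasticsearch:
      nodeSelector:
        logging: es-node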
To verify that a component has moved, you can use the oc get pod -o wide command.
For example:
You want to move the Kibana pod from the ip-10-0-147-79.us-east-2.compute.internal node:
$ oc get pod kibana-5b8bdf44f9-ccpq9 -o wide

NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES
kibana-5b8bdf44f9-ccpq9   2/2     Running   0          27s   10.129.2.18   ip-10-0-147-79.us-east-2.compute.internal   <none>           <none>
You want to move the Kibana pod to the ip-10-0-139-48.us-east-2.compute.internal node, a dedicated infrastructure node:
$ oc get nodes

NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-0-133-216.us-east-2.compute.internal   Ready    master   60m   v1.17.1
ip-10-0-139-146.us-east-2.compute.internal   Ready    master   60m   v1.17.1
ip-10-0-139-192.us-east-2.compute.internal   Ready    worker   51m   v1.17.1
ip-10-0-139-241.us-east-2.compute.internal   Ready    worker   51m   v1.17.1
ip-10-0-147-79.us-east-2.compute.internal    Ready    worker   51m   v1.17.1
ip-10-0-152-241.us-east-2.compute.internal   Ready    master   60m   v1.17.1
ip-10-0-139-48.us-east-2.compute.internal    Ready    infra    51m   v1.17.1
Note that the node has a node-role.kubernetes.io/infra: '' label:
$ oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml

kind: Node
apiVersion: v1
metadata:
  name: ip-10-0-139-48.us-east-2.compute.internal
  selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal
  uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751
  resourceVersion: '39083'
  creationTimestamp: '2020-04-13T19:07:55Z'
  labels:
    node-role.kubernetes.io/infra: ''
....
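If the node you want to use does not already have this label, you can add it with the oc label command. This is only a sketch; <node-name> is a placeholder for your node:

$ oc label node <node-name> node-role.kubernetes.io/infra=''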
To move the Kibana pod, edit the ClusterLogging CR to add a node selector:
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
....
spec:
....
  visualization:
    kibana:
      nodeSelector: (1)
        node-role.kubernetes.io/infra: ''
      proxy:
        resources: null
      replicas: 1
      resources: null
    type: kibana
1 Add a node selector to match the label in the node specification.
After you save the CR, the current Kibana pod is terminated and a new pod is deployed:
$ oc get pods

NAME                                            READY   STATUS        RESTARTS   AGE
cluster-logging-operator-84d98649c4-zb9g7       1/1     Running       0          29m
elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg   2/2     Running       0          28m
elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj   2/2     Running       0          28m
elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78    2/2     Running       0          28m
fluentd-42dzz                                   1/1     Running       0          28m
fluentd-d74rq                                   1/1     Running       0          28m
fluentd-m5vr9                                   1/1     Running       0          28m
fluentd-nkxl7                                   1/1     Running       0          28m
fluentd-pdvqb                                   1/1     Running       0          28m
fluentd-tflh6                                   1/1     Running       0          28m
kibana-5b8bdf44f9-ccpq9                         2/2     Terminating   0          4m11s
kibana-7d85dcffc8-bfpfp                         2/2     Running       0          33s
The new pod is on the ip-10-0-139-48.us-east-2.compute.internal node:
$ oc get pod kibana-7d85dcffc8-bfpfp -o wide

NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES
kibana-7d85dcffc8-bfpfp   2/2     Running   0          43s   10.131.0.22   ip-10-0-139-48.us-east-2.compute.internal   <none>           <none>
After a few moments, the original Kibana pod is removed.
$ oc get pods

NAME                                            READY   STATUS    RESTARTS   AGE
cluster-logging-operator-84d98649c4-zb9g7       1/1     Running   0          30m
elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg   2/2     Running   0          29m
elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj   2/2     Running   0          29m
elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78    2/2     Running   0          29m
fluentd-42dzz                                   1/1     Running   0          29m
fluentd-d74rq                                   1/1     Running   0          29m
fluentd-m5vr9                                   1/1     Running   0          29m
fluentd-nkxl7                                   1/1     Running   0          29m
fluentd-pdvqb                                   1/1     Running   0          29m
fluentd-tflh6                                   1/1     Running   0          29m
kibana-7d85dcffc8-bfpfp                         2/2     Running   0          62s