
You can use node selectors to control which nodes the Elasticsearch, Kibana, Curator, and Fluentd pods are deployed to.

Specifying a node for cluster logging components using node selectors

Each component specification includes a nodeSelector field that restricts the component's pods to nodes whose labels match the selector.

Prerequisites
  • Cluster logging and Elasticsearch must be installed. These features are not installed by default.

Procedure
  1. Add the desired label to your nodes:

    $ oc label <resource> <name> <key>=<value>

    For example, to label a node so that it matches the Elasticsearch node selector used in the following step:

    $ oc label nodes ip-10-0-142-25.ec2.internal logging=es
  2. Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

    $ oc edit ClusterLogging instance
    
    apiVersion: "logging.openshift.io/v1"
    kind: "ClusterLogging"
    metadata:
      name: "nodeselector"
    spec:
      managementState: "Managed"
      logStore:
        type: "elasticsearch"
        elasticsearch:
          nodeSelector:  (1)
            logging: es
          nodeCount: 1
          resources:
            limits:
              memory: 2Gi
            requests:
              cpu: 200m
              memory: 2Gi
          storage:
            size: "20G"
            storageClassName: "gp2"
          redundancyPolicy: "ZeroRedundancy"
      visualization:
        type: "kibana"
        kibana:
          nodeSelector:  (2)
            logging: kibana
          replicas: 1
      curation:
        type: "curator"
        curator:
          nodeSelector:  (3)
            logging: curator
          schedule: "*/10 * * * *"
      collection:
        logs:
          type: "fluentd"
          fluentd:
            nodeSelector:  (4)
              logging: fluentd
1 Node selector for Elasticsearch.
2 Node selector for Kibana.
3 Node selector for Curator.
4 Node selector for Fluentd.
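
After you save the CR, the pods for each component are rescheduled onto nodes that match the corresponding selectors. To verify the placement, list the logging pods together with the nodes they run on, and confirm which nodes carry a given label, for example the logging=es label applied in step 1:

    $ oc get pods -o wide -n openshift-logging

    $ oc get nodes -l logging=es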

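If you prefer a non-interactive change, a JSON merge patch can set a single node selector without opening an editor. The following is a minimal sketch, assuming the CR is named instance as in the edit command above; it targets only the Kibana component and leaves the rest of the spec unchanged:

    $ oc patch ClusterLogging instance -n openshift-logging --type merge \
        -p '{"spec":{"visualization":{"kibana":{"nodeSelector":{"logging":"kibana"}}}}}'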