
OpenShift Container Platform supports the Elasticsearch rolling cluster restart. A rolling restart applies appropriate changes to the Elasticsearch cluster without downtime (if three master nodes are configured). The Elasticsearch cluster remains online and operational, with nodes taken offline one at a time.

Performing an Elasticsearch rolling cluster restart

Perform a rolling restart when you change the elasticsearch configmap or any of the elasticsearch-* deployment configurations.

A rolling restart is also recommended if the nodes that Elasticsearch pods run on require a reboot.
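
For example, you can review the current configuration and list the Elasticsearch deployments before deciding whether a rolling restart is needed. The object names below follow the defaults used throughout this procedure (the elasticsearch configmap and elasticsearch-* deployments in the openshift-logging project) and might differ in your cluster:

    $ oc get configmap/elasticsearch -n openshift-logging -o yaml
    $ oc get deployments -n openshift-logging | grep elasticsearch-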

Prerequisites
  • Cluster logging and Elasticsearch must be installed.
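
    You can confirm this prerequisite by listing the Elasticsearch pods in the openshift-logging project; each pod should report a Running status, for example:

    $ oc get pods -n openshift-logging | grep elasticsearch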

Procedure

To perform a rolling cluster restart:

  1. Change to the openshift-logging project:

    $ oc project openshift-logging
  2. Use the following command to extract the CA certificate from Elasticsearch and write it to the admin-ca file:

    $ oc extract secret/elasticsearch --to=. --keys=admin-ca
    
    admin-ca
  3. Perform a shard synced flush to ensure there are no pending operations waiting to be written to disk prior to shutting down:

    $ oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- curl -s --cacert /etc/elasticsearch/secret/admin-ca --cert /etc/elasticsearch/secret/admin-cert --key /etc/elasticsearch/secret/admin-key -XPOST 'https://localhost:9200/_flush/synced'

    For example:

    $ oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- curl -s --cacert /etc/elasticsearch/secret/admin-ca --cert /etc/elasticsearch/secret/admin-cert --key /etc/elasticsearch/secret/admin-key -XPOST 'https://localhost:9200/_flush/synced'
  4. Use the OpenShift Container Platform es_util tool to prevent shard balancing when purposely bringing down nodes:

    $ oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=_cluster/settings -XPUT 'https://localhost:9200/_cluster/settings' -d '{ "transient": { "cluster.routing.allocation.enable" : "none" } }'

    For example:

    $ oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=_cluster/settings?pretty=true -XPUT 'https://localhost:9200/_cluster/settings' -d '{ "transient": { "cluster.routing.allocation.enable" : "none" } }'
    
    {
      "acknowledged" : true,
      "persistent" : { },
      "transient" : {
        "cluster" : {
          "routing" : {
            "allocation" : {
              "enable" : "none"
            }
          }
        }
      }
    }
  5. When the previous step is complete, perform the following steps for each deployment in the Elasticsearch cluster (a scripted sketch of this loop appears after the procedure):

    1. By default, the OpenShift Container Platform Elasticsearch cluster blocks rollouts to its nodes. Use the following command to allow rollouts so that the pod can pick up the changes:

      $ oc rollout resume deployment/<deployment-name>

      For example:

      $ oc rollout resume deployment/elasticsearch-cdm-0-1
      deployment.extensions/elasticsearch-cdm-0-1 resumed

      A new pod is deployed. Once the pod has a ready container, you can move on to the next deployment.

      $ oc get pods | grep elasticsearch-*
      
      NAME                                            READY   STATUS    RESTARTS   AGE
      elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6k    2/2     Running   0          22h
      elasticsearch-cdm-5ceex6ts-2-f799564cb-l9mj7    2/2     Running   0          22h
      elasticsearch-cdm-5ceex6ts-3-585968dc68-k7kjr   2/2     Running   0          22h
    2. After the new pod is ready, reset the deployment to disallow rollouts:

      $ oc rollout pause deployment/<deployment-name>

      For example:

      $ oc rollout pause deployment/elasticsearch-cdm-0-1
      
      deployment.extensions/elasticsearch-cdm-0-1 paused
    3. Check that the Elasticsearch cluster is in a green state:

      $ oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=_cluster/health?pretty=true

      If you performed a rollout on the Elasticsearch pod you used in the previous commands, the pod no longer exists and you need a new pod name here.

      For example:

      $ oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=_cluster/health?pretty=true
      
      {
        "cluster_name" : "elasticsearch",
        "status" : "green", (1)
        "timed_out" : false,
        "number_of_nodes" : 3,
        "number_of_data_nodes" : 3,
        "active_primary_shards" : 8,
        "active_shards" : 16,
        "relocating_shards" : 0,
        "initializing_shards" : 0,
        "unassigned_shards" : 1,
        "delayed_unassigned_shards" : 0,
        "number_of_pending_tasks" : 0,
        "number_of_in_flight_fetch" : 0,
        "task_max_waiting_in_queue_millis" : 0,
        "active_shards_percent_as_number" : 100.0
      }
      1 Make sure this parameter is green before proceeding.
  6. If you changed the Elasticsearch configuration map, repeat these steps for each Elasticsearch pod.

  7. Once all the deployments for the cluster have been rolled out, re-enable shard balancing:

    $ oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=_cluster/settings -XPUT 'https://localhost:9200/_cluster/settings' -d '{ "transient": { "cluster.routing.allocation.enable" : "all" } }'

    For example:

    $ oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=_cluster/settings?pretty=true -XPUT 'https://localhost:9200/_cluster/settings' -d '{ "transient": { "cluster.routing.allocation.enable" : "all" } }'
    
    {
      "acknowledged" : true,
      "persistent" : { },
      "transient" : {
        "cluster" : {
          "routing" : {
            "allocation" : {
              "enable" : "all"
            }
          }
        }
      }
    }
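
Optionally, you can confirm that shard allocation is re-enabled and that the cluster has returned to a green state by querying the cluster settings and health again from any Elasticsearch pod. This assumes that es_util sends a plain GET request when no method is specified, as in the health check in step 5:

    $ oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=_cluster/settings?pretty=true
    $ oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=_cluster/health?pretty=true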
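
The per-deployment rollout in step 5 can also be scripted. The following is a minimal sketch rather than part of the documented procedure: it assumes the openshift-logging project, deployments and pods named elasticsearch-cdm-*, and the pretty-printed health output shown above. Adjust the names to match your cluster:

    #!/bin/bash
    # Minimal sketch: roll each Elasticsearch deployment one at a time.
    # Resume the deployment, wait for the new pod, wait for the cluster to
    # return to green, then pause the deployment again.

    NAMESPACE=openshift-logging

    for deployment in $(oc get deployments -n "${NAMESPACE}" -o name | grep elasticsearch-cdm); do
      oc rollout resume -n "${NAMESPACE}" "${deployment}"

      # Block until the new pod for this deployment has a ready container.
      oc rollout status -n "${NAMESPACE}" "${deployment}" --watch=true

      # Query cluster health through any running Elasticsearch pod and wait
      # for a green status before moving on to the next deployment.
      while true; do
        pod=$(oc get pods -n "${NAMESPACE}" -o name | grep elasticsearch-cdm | head -n 1 | cut -d/ -f2)
        if oc exec -n "${NAMESPACE}" "${pod}" -c elasticsearch -- \
            es_util --query=_cluster/health?pretty=true | grep -q '"status" : "green"'; then
          break
        fi
        sleep 10
      done

      oc rollout pause -n "${NAMESPACE}" "${deployment}"
    done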