You can send Elasticsearch logs to external devices, such as an externally-hosted Elasticsearch instance or an external syslog server. You can also configure Fluentd to send logs to an external log aggregator.

You must set cluster logging to the Unmanaged state before performing these configurations, unless otherwise noted. For more information, see Changing the cluster logging management state.

Configuring the log collector to send logs to an external Elasticsearch instance

The log collector sends logs to the value of the ES_HOST, ES_PORT, OPS_HOST, and OPS_PORT environment variables of the Elasticsearch deployment configuration. The application logs are directed to the ES_HOST destination, and operations logs to OPS_HOST.

Sending logs directly to an AWS Elasticsearch instance is not supported. Use Fluentd Secure Forward to direct logs to an instance of Fluentd that you control and that is configured with the fluent-plugin-aws-elasticsearch-service plug-in.

  • Cluster logging and Elasticsearch must be installed.

  • Set cluster logging to the unmanaged state.


To direct logs to a specific Elasticsearch instance:

  1. Edit the fluentd DaemonSet in the openshift-logging project:

    $ oc edit ds/fluentd
              - name: ES_HOST
                value: elasticsearch
              - name: ES_PORT
                value: '9200'
              - name: ES_CLIENT_CERT
                value: /etc/fluent/keys/app-cert
              - name: ES_CLIENT_KEY
                value: /etc/fluent/keys/app-key
              - name: ES_CA
                value: /etc/fluent/keys/app-ca
              - name: OPS_HOST
                value: elasticsearch
              - name: OPS_PORT
                value: '9200'
              - name: OPS_CLIENT_CERT
                value: /etc/fluent/keys/infra-cert
              - name: OPS_CLIENT_KEY
                value: /etc/fluent/keys/infra-key
              - name: OPS_CA
                value: /etc/fluent/keys/infra-ca
  2. To send both application logs and operations logs to the same external Elasticsearch instance, set ES_HOST and OPS_HOST to the same destination, and ensure that ES_PORT and OPS_PORT also have the same value.

  3. Configure your externally-hosted Elasticsearch instance for TLS. Only externally-hosted Elasticsearch instances that use Mutual TLS are allowed.

If you are not using the provided Kibana and Elasticsearch images, you will not have the same multi-tenant capabilities and your data will not be restricted by user access to a particular project.

Configuring the log collector to send logs to an external syslog server

Use the fluent-plugin-remote-syslog plug-in on the host to send logs to an external syslog server.


Set cluster logging to the unmanaged state.

  1. Set environment variables in the fluentd daemonset in the openshift-logging project:

            - name: fluentd
              image: ''
              env:
                - name: REMOTE_SYSLOG_HOST (1)
                  value: host1
                - name: REMOTE_SYSLOG_HOST_BACKUP
                  value: host2
                - name: REMOTE_SYSLOG_PORT_BACKUP
                  value: '5555'
    1 The desired remote syslog host. Required for each host.

    This builds two destinations. The syslog server on host1 receives messages on the default port 514, while host2 receives the same messages on port 5555.

  2. Alternatively, you can configure your own custom fluentd daemonset in the openshift-logging project.

    Fluentd Environment Variables

    Parameter                           Description

    USE_REMOTE_SYSLOG                   Defaults to false. Set to true to enable use of the fluent-plugin-remote-syslog gem.

    REMOTE_SYSLOG_HOST                  (Required) Hostname or IP address of the remote syslog server.

    REMOTE_SYSLOG_PORT                  Port number to connect on. Defaults to 514.

    REMOTE_SYSLOG_SEVERITY              Set the syslog severity level. Defaults to debug.

    REMOTE_SYSLOG_FACILITY              Set the syslog facility. Defaults to local0.

    REMOTE_SYSLOG_USE_RECORD            Defaults to false. Set to true to use the record's severity and facility fields to set the severity and facility of the syslog message.

    REMOTE_SYSLOG_REMOVE_TAG_PREFIX     Removes the prefix from the tag. Defaults to '' (empty).

    REMOTE_SYSLOG_TAG_KEY               If specified, uses this field as the key to look up on the record, to set the tag on the syslog message.

    REMOTE_SYSLOG_PAYLOAD_KEY           If specified, uses this field as the key to look up on the record, to set the payload on the syslog message.

    This implementation is insecure, and should only be used in environments where you can guarantee no snooping on the connection.
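For reference, the severity and facility values above combine into the syslog priority (PRI) that prefixes each message on the wire, per the syslog protocol: PRI = facility code × 8 + severity code. A minimal local sketch using the defaults noted in the table (facility local0, code 16; severity debug, code 7):

```shell
# Compute the syslog PRI value from the default facility and severity.
facility=16   # local0 (default facility)
severity=7    # debug (default severity)
pri=$(( facility * 8 + severity ))
# The PRI is sent in angle brackets at the start of each syslog message.
printf '<%d>fluentd: test message\n' "$pri"
# prints: <135>fluentd: test message
```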

Configuring Fluentd to send logs to an external log aggregator

You can configure Fluentd to send a copy of its logs to an external log aggregator, rather than to the default Elasticsearch, by using the secure-forward plug-in. You can then further process log records after the locally hosted Fluentd has processed them.

The logging deployment provides a secure-forward.conf section in the Fluentd configmap for configuring the external aggregator.


To send a copy of Fluentd logs to an external log aggregator:

  1. Edit the secure-forward.conf section of the Fluentd configuration map:

    Sample secure-forward.conf section
    $ oc edit configmap/fluentd -n openshift-logging
      <store>
        @type forward
        <server> (1)
          name externalserver1
          host 192.168.1.1
          port 24224
        </server>
        <server> (1)
          name externalserver2
          host 192.168.1.2
          port 24224
        </server>
      </store>
    1 Enter the name, host, and port for your external Fluentd server.
  2. Add certificates to be used in secure-forward.conf to the existing secret that is mounted on the Fluentd pods. The your_ca_cert and your_private_key values must match what is specified in secure-forward.conf in the fluentd ConfigMap:

    $ oc patch secrets/fluentd --type=json \
      --patch "[{'op':'add','path':'/data/your_ca_cert','value':'$(base64 /path/to/your_ca_cert.pem)'}]"
    $ oc patch secrets/fluentd --type=json \
      --patch "[{'op':'add','path':'/data/your_private_key','value':'$(base64 /path/to/your_private_key.pem)'}]"

    Replace your_private_key with a generic name. This is a link to the JSON path, not a path on your host system.
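The value field in each patch is simply the base64-encoded content of the certificate or key file. A local sketch (no cluster required; the file and its content are stand-ins, not a real certificate) of how that value is built, using GNU base64's -w0 option so the encoded value stays on one line inside the JSON patch:

```shell
# Create a stand-in file in place of a real CA certificate.
printf 'dummy-cert' > /tmp/your_ca_cert.pem
# -w0 disables GNU base64's 76-column line wrapping, which would otherwise
# embed newlines in the encoded value and corrupt the JSON patch.
value=$(base64 -w0 /tmp/your_ca_cert.pem)
patch="[{'op':'add','path':'/data/your_ca_cert','value':'${value}'}]"
echo "$patch"
```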

    When configuring the external aggregator, it must be able to accept messages securely from Fluentd.

    • If using Fluentd 1.0 or later, configure the built-in in_forward plug-in with the appropriate security parameters.

      In Fluentd 1.0 and later, in_forward implements the server (receiving) side, and out_forward implements the client (sending) side.

      For Fluentd versions 1.0 or later, see the Fluentd documentation for further explanation of how to set up the in_forward plug-in and the out_forward plug-in.
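      As an illustration (not taken from the product documentation), a minimal in_forward receiver on a Fluentd 1.x aggregator with TLS and a shared key might look like the following; the port, certificate paths, hostname, and shared key are placeholders:

      ```text
      <source>
        @type forward
        port 24224
        <transport tls>
          cert_path /path/to/aggregator-cert.pem
          private_key_path /path/to/aggregator-key.pem
        </transport>
        <security>
          self_hostname aggregator.example.com
          shared_key <SECRET_STRING>
        </security>
      </source>
      ```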

    • If using Fluentd 0.12 or earlier, you must have the fluent-plugin-secure-forward plug-in installed and make use of the input plug-in it provides. In Fluentd 0.12, the same fluent-plugin-secure-forward plugin implements both the client (sending) side and the server (receiving) side.

      For Fluentd 0.12 you can find further explanation of fluent-plugin-secure-forward plug-in in fluent-plugin-secure-forward repository.

      The following is an example of an in_forward configuration for Fluentd 0.12:

      secure-forward.conf: |
        # <store>
        # @type secure_forward
        # self_hostname ${hostname}
        # shared_key <SECRET_STRING>
        # secure yes
        # enable_strict_verification yes
        # ca_cert_path /etc/fluent/keys/your_ca_cert
        # ca_private_key_path /etc/fluent/keys/your_private_key
          # for private CA secret key
        # ca_private_key_passphrase passphrase
        # <server>
          # hostname or IP
        #   host
        #   port 24284
        # </server>
        # <server>
          # ip address to connect
        #   host
          # specify hostlabel for FQDN verification if ipaddress is used for host
        #   hostlabel
        # </server>
        # </store>