You can send Elasticsearch logs to external devices, such as an externally-hosted Elasticsearch instance or an external syslog server. You can also configure Fluentd to send logs to an external log aggregator.

Procedures in this topic require your cluster to be in an unmanaged state. For more information, see Changing the cluster logging management state.

Configuring Fluentd to send logs to an external Elasticsearch instance

Fluentd sends logs to the value of the ES_HOST, ES_PORT, OPS_HOST, and OPS_PORT environment variables of the Elasticsearch deployment configuration. The application logs are directed to the ES_HOST destination, and operations logs to OPS_HOST.

Sending logs directly to an AWS Elasticsearch instance is not supported. Use Fluentd Secure Forward to direct logs to an instance of Fluentd that you control and that is configured with the fluent-plugin-aws-elasticsearch-service plug-in.

Prerequisites
  • Cluster logging and Elasticsearch must be installed.

  • Set cluster logging to the unmanaged state.

Procedure

To direct logs to a specific Elasticsearch instance:

  1. Edit the fluentd DaemonSet in the openshift-logging project:

    $ oc edit ds/fluentd
    
    spec:
      template:
        spec:
          containers:
              env:
              - name: ES_HOST
                value: elasticsearch
              - name: ES_PORT
                value: '9200'
              - name: ES_CLIENT_CERT
                value: /etc/fluent/keys/app-cert
              - name: ES_CLIENT_KEY
                value: /etc/fluent/keys/app-key
              - name: ES_CA
                value: /etc/fluent/keys/app-ca
              - name: OPS_HOST
                value: elasticsearch
              - name: OPS_PORT
                value: '9200'
              - name: OPS_CLIENT_CERT
                value: /etc/fluent/keys/infra-cert
              - name: OPS_CLIENT_KEY
                value: /etc/fluent/keys/infra-key
              - name: OPS_CA
                value: /etc/fluent/keys/infra-ca
  2. For an external Elasticsearch instance to contain both application and operations logs, set ES_HOST and OPS_HOST to the same destination, and ensure that ES_PORT and OPS_PORT also have the same value.

  3. Configure your externally hosted Elasticsearch instance for TLS:

    • If your externally hosted Elasticsearch instance does not use TLS, update the _CLIENT_CERT, _CLIENT_KEY, and _CA variables to be empty.

    • If your externally hosted Elasticsearch instance uses TLS, but not mutual TLS, update the _CLIENT_CERT and _CLIENT_KEY variables to be empty. Then patch or recreate the fluentd secret with the appropriate _CA value for communicating with your Elasticsearch instance.

    • If your externally hosted Elasticsearch instance uses mutual TLS, patch or recreate the fluentd secret with your client key, client certificate, and CA. The provided Elasticsearch instance uses mutual TLS.

If you are not using the provided Kibana and Elasticsearch images, you will not have the same multi-tenant capabilities and your data will not be restricted by user access to a particular project.
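Instead of editing the daemonset by hand, you can update the same environment variables with the oc set env command. The following is a sketch only: es.example.com is a hypothetical external host, and the example assumes a TLS setup that is not mutual, so the client certificate and key variables are cleared:

    $ oc set env ds/fluentd -n openshift-logging \
        ES_HOST=es.example.com ES_PORT=9200 \
        OPS_HOST=es.example.com OPS_PORT=9200 \
        ES_CLIENT_CERT= ES_CLIENT_KEY= \
        OPS_CLIENT_CERT= OPS_CLIENT_KEY=

Assigning an empty value, as in ES_CLIENT_CERT=, sets the variable to the empty string rather than removing it, which matches the "update the variables to be empty" guidance above.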

Configuring Fluentd to send logs to an external syslog server

Use the fluent-plugin-remote-syslog plug-in on the host to send logs to an external syslog server.

Prerequisite

  • Set cluster logging to the unmanaged state.

Procedure
  1. Set environment variables in the fluentd DaemonSet in the openshift-logging project:

    spec:
      template:
        spec:
          containers:
            - name: fluentd
              image: 'registry.redhat.io/openshift4/ose-logging-fluentd:v4.1'
              env:
                - name: REMOTE_SYSLOG_HOST (1)
                  value: host1
                - name: REMOTE_SYSLOG_HOST_BACKUP
                  value: host2
                - name: REMOTE_SYSLOG_PORT_BACKUP
                  value: '5555'
    1 The desired remote syslog host. Required for each host.

    This configuration builds two destinations. The syslog server on host1 receives messages on the default port of 514, while host2 receives the same messages on port 5555.

  2. Alternatively, you can configure your own custom fluentd DaemonSet in the openshift-logging project by using the following environment variables:

    Fluentd Environment Variables

    Parameter Description

    USE_REMOTE_SYSLOG

    Defaults to false. Set to true to enable use of the fluent-plugin-remote-syslog gem

    REMOTE_SYSLOG_HOST

    (Required) Hostname or IP address of the remote syslog server.

    REMOTE_SYSLOG_PORT

    Port number to connect on. Defaults to 514.

    REMOTE_SYSLOG_SEVERITY

    Set the syslog severity level. Defaults to debug.

    REMOTE_SYSLOG_FACILITY

    Set the syslog facility. Defaults to local0.

    REMOTE_SYSLOG_USE_RECORD

    Defaults to false. Set to true to use the record's severity and facility fields for the severity and facility of the syslog message.

    REMOTE_SYSLOG_REMOVE_TAG_PREFIX

    Removes the prefix from the tag. Defaults to '' (empty).

    REMOTE_SYSLOG_TAG_KEY

    If specified, uses this field as the key to look on the record, to set the tag on the syslog message.

    REMOTE_SYSLOG_PAYLOAD_KEY

    If specified, uses this field as the key to look on the record, to set the payload on the syslog message.

    This implementation is insecure, and should only be used in environments where you can guarantee no snooping on the connection.
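As with the Elasticsearch variables, these syslog variables can also be set with oc set env instead of editing the daemonset directly. This is a sketch only; syslog.example.com is a hypothetical host, and the port, severity, and facility shown simply restate the defaults:

    $ oc set env ds/fluentd -n openshift-logging \
        USE_REMOTE_SYSLOG=true \
        REMOTE_SYSLOG_HOST=syslog.example.com \
        REMOTE_SYSLOG_PORT=514 \
        REMOTE_SYSLOG_SEVERITY=debug \
        REMOTE_SYSLOG_FACILITY=local0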

Configuring Fluentd to send logs to an external log aggregator

You can configure Fluentd to send a copy of its logs to an external log aggregator, instead of to the default Elasticsearch instance, by using the secure-forward plug-in. From there, you can further process log records after the locally hosted Fluentd has processed them.

The logging deployment provides a secure-forward.conf section in the Fluentd configmap for configuring the external aggregator:

Prerequisite

  • Set cluster logging to the unmanaged state.

Procedure

To send a copy of Fluentd logs to an external log aggregator:

  1. Edit the secure-forward.conf section of the Fluentd configuration map:

    $ oc edit configmap/fluentd -n openshift-logging

    Sample secure-forward.conf section
    <store>
      @type forward
      <server> (1)
        name externalserver1
        host 192.168.1.1
        port 24224
      </server>
      <server> (1)
        name externalserver2
        host 192.168.1.2
        port 24224
      </server>
    </store>
    1 Enter the name, host, and port for your external Fluentd server.
  2. Add certificates to be used in secure-forward.conf to the existing secret that is mounted on the Fluentd pods. The your_ca_cert and your_private_key values must match what is specified in secure-forward.conf in configmap/logging-fluentd:

    $ oc patch secrets/fluentd --type=json \
      --patch "[{\"op\":\"add\",\"path\":\"/data/your_ca_cert\",\"value\":\"$(base64 -w0 /path/to/your_ca_cert.pem)\"}]"
    $ oc patch secrets/fluentd --type=json \
      --patch "[{\"op\":\"add\",\"path\":\"/data/your_private_key\",\"value\":\"$(base64 -w0 /path/to/your_private_key.pem)\"}]"

    A JSON patch must use double quotes around its keys and values, and the base64 output must not contain line breaks (-w0), or the patch is rejected as invalid JSON.

    Replace your_private_key with a generic name. This value is the key in the JSON path of the patch, not a path on your host system.

    When configuring the external aggregator, it must be able to accept messages securely from Fluentd.

    • If using Fluentd 1.0 or later, configure the built-in in_forward plug-in with the appropriate security parameters.

      In Fluentd 1.0 and later, in_forward implements the server (receiving) side, and out_forward implements the client (sending) side.

      For Fluentd versions 1.0 or higher, you can find further explanation of how to set up the in_forward and out_forward plug-ins in the Fluentd documentation.

    • If using Fluentd 0.12 or earlier, you must have the fluent-plugin-secure-forward plug-in installed and make use of the input plug-in it provides. In Fluentd 0.12, the same fluent-plugin-secure-forward plugin implements both the client (sending) side and the server (receiving) side.

      For Fluentd 0.12 you can find further explanation of fluent-plugin-secure-forward plug-in in fluent-plugin-secure-forward repository.

      The following is an example of a secure-forward.conf configuration for Fluentd 0.12:

      secure-forward.conf: |
        # <store>
        # @type secure_forward
      
        # self_hostname ${hostname}
        # shared_key <SECRET_STRING>
      
        # secure yes
        # enable_strict_verification yes
      
        # ca_cert_path /etc/fluent/keys/your_ca_cert
        # ca_private_key_path /etc/fluent/keys/your_private_key
          # for private CA secret key
        # ca_private_key_passphrase passphrase
      
        <server>
          host server.fqdn.example.com  # or IP
          # port 24284
        </server>
        # <server>
          # ip address to connect
        #   host 203.0.113.8
          # specify hostlabel for FQDN verification if ipaddress is used for host
        #   hostlabel server.fqdn.example.com
        # </server>
        # </store>
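On the receiving side, a Fluentd 1.0 or later aggregator accepts these forwarded messages with the built-in in_forward plug-in. The following is a sketch of a minimal receiving configuration; aggregator.example.com and your_shared_key are placeholder values, and the shared_key must match the key configured on the sending side:

    <source>
      @type forward
      port 24224
      <security>
        self_hostname aggregator.example.com
        shared_key your_shared_key
      </security>
    </source>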