This glossary defines common terms that are used in the logging subsystem documentation.
You can use annotations to attach metadata to objects.
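For example, the following sketch attaches annotations to a pod; the pod name, annotation keys, and image are illustrative placeholders, not values from this documentation.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod                 # illustrative pod name
  annotations:
    # Annotations hold arbitrary, non-identifying metadata as key-value pairs.
    description: "Handles checkout traffic for the storefront"
    example.com/build: "2024-05-01.3"    # hypothetical annotation key
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest    # placeholder image
```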
The Red Hat OpenShift Logging Operator provides a set of APIs to control the collection and forwarding of application, infrastructure, and audit logs.
A CR is an extension of the Kubernetes API. To configure the logging subsystem and log forwarding, you can customize the ClusterLogging and ClusterLogForwarder custom resources.
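As an illustration, a minimal ClusterLogForwarder sketch that forwards application logs to an external Elasticsearch instance might look as follows; the output name and URL are placeholders, and the exact fields can vary by logging subsystem version.

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
    - name: remote-es                              # placeholder output name
      type: elasticsearch
      url: https://elasticsearch.example.com:9200  # placeholder endpoint
  pipelines:
    - name: forward-app-logs
      inputRefs:
        - application                              # collect application logs
      outputRefs:
        - remote-es                                # send them to the output above
```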
The event router is a pod that watches OpenShift Dedicated events and writes them as logs so that they can be collected by the logging subsystem.
Fluentd is a log collector that resides on each OpenShift Dedicated node. It gathers application, infrastructure, and audit logs and forwards them to different outputs.
Garbage collection is the process of cleaning up cluster resources, such as terminated containers and images that are not referenced by any running pods.
Elasticsearch is a distributed search and analytics engine. OpenShift Dedicated uses Elasticsearch as a default log store for the logging subsystem.
The Elasticsearch Operator is used to run an Elasticsearch cluster on OpenShift Dedicated. The Elasticsearch Operator provides self-service for Elasticsearch cluster operations and is used by the logging subsystem.
Indexing is a data structure technique that is used to quickly locate and access data. Indexing optimizes performance by minimizing the amount of disk access required when a query is processed.
The Log Forwarding API enables you to parse JSON logs into a structured object and forward them either to the Elasticsearch log store managed by the logging subsystem or to any other third-party system supported by the Log Forwarding API.
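For example, a sketch of a ClusterLogForwarder pipeline that parses JSON application logs and sends them to the default log store might look like this; the pipeline name is a placeholder, and the exact fields can vary by logging subsystem version.

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  pipelines:
    - name: parse-json-app-logs    # placeholder pipeline name
      inputRefs:
        - application
      outputRefs:
        - default                  # the log store managed by the logging subsystem
      parse: json                  # parse the JSON message body into structured fields
```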
Kibana is a browser-based console interface to query, discover, and visualize your Elasticsearch data through histograms, line graphs, and pie charts.
The Kubernetes API server validates and configures data for the API objects.
Labels are key-value pairs that you can use to organize and select subsets of objects, such as a pod.
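For example, the following sketch labels a pod; the pod name, label values, and image are illustrative placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend-pod               # illustrative pod name
  labels:
    app: frontend                  # key-value pairs used to organize and select objects
    tier: web
spec:
  containers:
    - name: web
      image: registry.example.com/frontend:latest   # placeholder image
```

You could then select the matching subset of pods with a label selector, for example `oc get pods -l app=frontend`.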
With the logging subsystem, you can aggregate application, infrastructure, and audit logs throughout your cluster. You can also store them in a default log store, forward them to third-party systems, and query and visualize the stored logs in the default log store.
A logging collector collects logs from the cluster, formats them, and forwards them to the log store or to third-party systems.
A log store is used to store aggregated logs. You can use an internal log store or forward logs to external log stores.
The log visualizer is the user interface (UI) component that you can use to view information such as logs, graphs, charts, and other metrics.
A node is a worker machine in the OpenShift Dedicated cluster. A node is either a virtual machine (VM) or a physical machine.
Operators are the preferred method of packaging, deploying, and managing a Kubernetes application in an OpenShift Dedicated cluster. An Operator takes human operational knowledge and encodes it into software that is packaged and shared with customers.
A pod is the smallest logical unit in Kubernetes. A pod consists of one or more containers and runs on a worker node.
RBAC is a key security control to ensure that cluster users and workloads have access only to resources required to execute their roles.
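As a sketch, the following Role and RoleBinding grant read-only access to pods in a single namespace; the role name, namespace, and user are illustrative placeholders.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader                  # illustrative role name
  namespace: example-project        # placeholder namespace
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"] # read-only access to pods
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: example-project
subjects:
  - kind: User
    name: alice                     # placeholder user name
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```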
Elasticsearch organizes log data from Fluentd into datastores, or indices, then subdivides each index into multiple pieces called shards.
Taints ensure that pods are scheduled onto appropriate nodes. You can apply one or more taints on a node.
You can apply tolerations to pods. Tolerations allow the scheduler to schedule pods onto nodes with matching taints.
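For illustration, a node could be tainted and a pod given a matching toleration as in the following sketch; the node name, taint key, value, and image are placeholders.

```yaml
# A node could be tainted with a command such as:
#   oc adm taint nodes node1 dedicated=logging:NoSchedule
# (node name, key, value, and effect are placeholders)
apiVersion: v1
kind: Pod
metadata:
  name: collector-pod              # illustrative pod name
spec:
  tolerations:
    - key: dedicated               # matches the taint key above
      operator: Equal
      value: logging
      effect: NoSchedule           # tolerate the NoSchedule taint
  containers:
    - name: collector
      image: registry.example.com/collector:latest   # placeholder image
```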
A user interface (UI) to manage OpenShift Dedicated. The web console for OpenShift Dedicated can be found at https://console.redhat.com/openshift.