Metrics enable cluster administrators to monitor how OpenShift Serverless cluster components and workloads are performing.
You can view different metrics for OpenShift Serverless by navigating to Dashboards in the OpenShift Container Platform web console Administrator perspective.
See the OpenShift Container Platform documentation on Managing metrics for information about enabling metrics for your cluster.
To view metrics for Knative components on OpenShift Container Platform, you need cluster administrator permissions and access to the web console Administrator perspective.
If Service Mesh is enabled with mTLS, metrics for Knative Serving are disabled by default because Service Mesh prevents Prometheus from scraping metrics. For information about resolving this issue, see Enabling Knative Serving metrics when using Service Mesh with mTLS. Scraping the metrics does not affect autoscaling of a Knative service, because scraping requests do not go through the activator. Consequently, no scraping takes place if no pods are running.
The following metrics are emitted by any component that implements controller logic. These metrics show details about reconciliation operations and the behavior of the work queue to which reconciliation requests are added.
Metric name | Description | Type | Tags | Unit |
---|---|---|---|---|
`work_queue_depth` | The depth of the work queue. | Gauge | `reconciler` | Integer (no units) |
`reconcile_count` | The number of reconcile operations. | Counter | `reconciler`, `success` | Integer (no units) |
`reconcile_latency` | The latency of reconcile operations. | Histogram | `reconciler`, `success` | Milliseconds |
`workqueue_adds_total` | The total number of add actions handled by the work queue. | Counter | `name` | Integer (no units) |
`workqueue_queue_latency_seconds` | The length of time an item stays in the work queue before being requested. | Histogram | `name` | Seconds |
`workqueue_retries_total` | The total number of retries that have been handled by the work queue. | Counter | `name` | Integer (no units) |
`workqueue_work_duration_seconds` | The length of time it takes to process an item from the work queue. | Histogram | `name` | Seconds |
`workqueue_unfinished_work_seconds` | The length of time that outstanding work queue items have been in progress. | Histogram | `name` | Seconds |
`workqueue_longest_running_processor_seconds` | The length of time that the longest outstanding work queue item has been in progress. | Histogram | `name` | Seconds |
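Histogram metrics such as reconcile latency expose `_sum` and `_count` series in the Prometheus exposition format, and dividing one by the other gives a mean over the scrape lifetime. The following sketch parses hypothetical exposition text; the sample values and the minimal parser are illustrative, not a real controller's output:

```python
# Sketch: derive a mean reconcile latency from Prometheus exposition text.
# The payload below is hypothetical sample data, not real controller output.
import re

EXPOSITION = """\
reconcile_latency_sum{reconciler="service"} 1200
reconcile_latency_count{reconciler="service"} 40
"""

def parse_samples(text):
    """Parse 'name{labels} value' lines into a {name: float} map."""
    samples = {}
    for line in text.splitlines():
        m = re.match(r'^([a-zA-Z_:][a-zA-Z0-9_:]*)(?:\{[^}]*\})?\s+(\S+)$', line)
        if m:
            samples[m.group(1)] = float(m.group(2))
    return samples

s = parse_samples(EXPOSITION)
mean_latency_ms = s["reconcile_latency_sum"] / s["reconcile_latency_count"]
print(f"mean reconcile latency: {mean_latency_ms:.1f} ms")  # 30.0 ms
```

In practice you would run the equivalent division as a Prometheus query rather than parsing the endpoint by hand.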
Webhook metrics report useful information about operations. For example, if a large number of operations fail, this might indicate an issue with a user-created resource.
Metric name | Description | Type | Tags | Unit |
---|---|---|---|---|
`request_count` | The number of requests that are routed to the webhook. | Counter | | Integer (no units) |
`request_latencies` | The response time for a webhook request. | Histogram | | Milliseconds |
Cluster administrators can view the following metrics for Knative Eventing components.
By aggregating the metrics by HTTP status code, events can be separated into two categories: successful events (2xx) and failed events (5xx).
You can use the following metrics to debug the broker ingress, see how it is performing, and see which events are being dispatched by the ingress component.
Metric name | Description | Type | Tags | Unit |
---|---|---|---|---|
`event_count` | Number of events received by a broker. | Counter | | Integer (no units) |
`event_dispatch_latencies` | The time taken to dispatch an event to a channel. | Histogram | | Milliseconds |
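The 2xx/5xx split described above amounts to summing the event counter over a response-code-class dimension. A minimal sketch with hypothetical series values, assuming the counter is broken down by a status-class label:

```python
# Sketch: split broker event counts into successful (2xx) and failed (5xx)
# deliveries by aggregating over a response-code-class label.
# The series values below are hypothetical sample data.
from collections import Counter

# (response_code_class, count) pairs as they might be scraped from the
# broker ingress event counter.
series = [("2xx", 950), ("2xx", 30), ("5xx", 20)]

totals = Counter()
for code_class, count in series:
    totals[code_class] += count

success_ratio = totals["2xx"] / sum(totals.values())
print(f"successful: {totals['2xx']}, failed: {totals['5xx']}")
print(f"success ratio: {success_ratio:.2%}")  # 98.00%
```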
You can use the following metrics to debug broker filters, see how they are performing, and see which events are being dispatched by the filters. You can also measure the latency of the filtering action on an event.
Metric name | Description | Type | Tags | Unit |
---|---|---|---|---|
`event_count` | Number of events received by a broker. | Counter | | Integer (no units) |
`event_dispatch_latencies` | The time taken to dispatch an event to a channel. | Histogram | | Milliseconds |
`event_processing_latencies` | The time it takes to process an event before it is dispatched to a trigger subscriber. | Histogram | | Milliseconds |
You can use the following metrics to debug `InMemoryChannel` channels, see how they are performing, and see which events are being dispatched by the channels.
Metric name | Description | Type | Tags | Unit |
---|---|---|---|---|
`event_count` | Number of events dispatched by `InMemoryChannel` channels. | Counter | | Integer (no units) |
`event_dispatch_latencies` | The time taken to dispatch an event from an `InMemoryChannel` channel. | Histogram | | Milliseconds |
You can use the following metrics to verify that events have been delivered from the event source to the connected event sink.
Metric name | Description | Type | Tags | Unit |
---|---|---|---|---|
`event_count` | Number of events sent by the event source. | Counter | | Integer (no units) |
`retry_event_count` | Number of retried events sent by the event source after initially failing to be delivered. | Counter | | Integer (no units) |
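Comparing the two counters gives a quick delivery-health signal: a rising retry count relative to the total indicates that the sink is rejecting or timing out on deliveries. A minimal sketch with hypothetical counter values:

```python
# Sketch: gauge delivery health from hypothetical source counter values.
event_count = 500        # total events sent by the event source
retry_event_count = 25   # events retried after initially failing delivery

retry_ratio = retry_event_count / event_count
print(f"retry ratio: {retry_ratio:.1%}")  # 5.0%
```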
Cluster administrators can view the following metrics for Knative Serving components.
You can use the following metrics to understand how applications respond when traffic passes through the activator.
Metric name | Description | Type | Tags | Unit |
---|---|---|---|---|
`request_concurrency` | The number of concurrent requests that are routed to the activator, or average concurrency over a reporting period. | Gauge | | Integer (no units) |
`request_count` | The number of requests that are routed to the activator. These are requests that have been fulfilled from the activator handler. | Counter | | Integer (no units) |
`request_latencies` | The response time in milliseconds for a fulfilled, routed request. | Histogram | | Milliseconds |
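Because the request counter is cumulative, a request rate is derived from two scrapes, the same way a Prometheus `rate()` query works. A minimal sketch with hypothetical scrape values:

```python
# Sketch: estimate a request rate from two scrapes of a cumulative
# request counter, as a rate() query would. Values are hypothetical.
t0, count0 = 0.0, 1000    # first scrape: time (s), counter value
t1, count1 = 30.0, 1450   # second scrape, 30 s later

rate_rps = (count1 - count0) / (t1 - t0)
print(f"request rate: {rate_rps:.1f} req/s")  # 15.0 req/s
```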
The autoscaler component exposes a number of metrics related to autoscaler behavior for each revision. For example, at any given time, you can monitor the targeted number of pods the autoscaler tries to allocate for a service, the average number of requests per second during the stable window, or whether the autoscaler is in panic mode if you are using the Knative pod autoscaler (KPA).
Metric name | Description | Type | Tags | Unit |
---|---|---|---|---|
`desired_pods` | The number of pods the autoscaler tries to allocate for a service. | Gauge | | Integer (no units) |
`excess_burst_capacity` | The excess burst capacity served over the stable window. | Gauge | | Integer (no units) |
`stable_request_concurrency` | The average number of requests for each observed pod over the stable window. | Gauge | | Integer (no units) |
`panic_request_concurrency` | The average number of requests for each observed pod over the panic window. | Gauge | | Integer (no units) |
`target_concurrency_per_pod` | The number of concurrent requests that the autoscaler tries to send to each pod. | Gauge | | Integer (no units) |
`stable_requests_per_second` | The average number of requests-per-second for each observed pod over the stable window. | Gauge | | Integer (no units) |
`panic_requests_per_second` | The average number of requests-per-second for each observed pod over the panic window. | Gauge | | Integer (no units) |
`target_requests_per_second` | The number of requests-per-second that the autoscaler targets for each pod. | Gauge | | Integer (no units) |
`panic_mode` | This value is `1` if the autoscaler is in panic mode, or `0` if the autoscaler is not in panic mode. | Gauge | | Integer (no units) |
`requested_pods` | The number of pods that the autoscaler has requested from the Kubernetes cluster. | Gauge | | Integer (no units) |
`actual_pods` | The number of pods that are allocated and currently have a ready state. | Gauge | | Integer (no units) |
`not_ready_pods` | The number of pods that have a not ready state. | Gauge | | Integer (no units) |
`pending_pods` | The number of pods that are currently pending. | Gauge | | Integer (no units) |
`terminating_pods` | The number of pods that are currently terminating. | Gauge | | Integer (no units) |
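The relationship between the concurrency metrics and the desired pod count can be illustrated by the core of the KPA calculation: observed concurrency over the window divided by the per-pod target, rounded up. This is a simplified sketch only; the real autoscaler also applies panic logic, burst capacity, and scale bounds, and the function and parameter names here are illustrative:

```python
# Simplified sketch of a KPA-style scale calculation:
# desired = ceil(observed concurrency / per-pod target), clamped to bounds.
# This omits panic mode, burst capacity, and rate-based scaling.
import math
from typing import Optional

def desired_pods(observed_concurrency: float, target_per_pod: float,
                 min_scale: int = 0, max_scale: Optional[int] = None) -> int:
    want = math.ceil(observed_concurrency / target_per_pod)
    want = max(want, min_scale)          # respect a minimum scale
    if max_scale is not None:
        want = min(want, max_scale)      # respect a maximum scale
    return want

# 73 concurrent requests with a target of 10 per pod -> 8 pods.
print(desired_pods(73, 10))  # 8
```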
Each Knative Serving control plane process emits a number of Go runtime memory statistics (MemStats).
The `name` tag for each of these metrics is an empty tag.
Metric name | Description | Type | Tags | Unit |
---|---|---|---|---|
`go_alloc` | The number of bytes of allocated heap objects. This metric is the same as `go_heap_alloc`. | Gauge | | Integer (no units) |
`go_total_alloc` | The cumulative bytes allocated for heap objects. | Gauge | | Integer (no units) |
`go_sys` | The total bytes of memory obtained from the operating system. | Gauge | | Integer (no units) |
`go_lookups` | The number of pointer lookups performed by the runtime. | Gauge | | Integer (no units) |
`go_mallocs` | The cumulative count of heap objects allocated. | Gauge | | Integer (no units) |
`go_frees` | The cumulative count of heap objects that have been freed. | Gauge | | Integer (no units) |
`go_heap_alloc` | The number of bytes of allocated heap objects. | Gauge | | Integer (no units) |
`go_heap_sys` | The number of bytes of heap memory obtained from the operating system. | Gauge | | Integer (no units) |
`go_heap_idle` | The number of bytes in idle, unused spans. | Gauge | | Integer (no units) |
`go_heap_in_use` | The number of bytes in spans that are currently in use. | Gauge | | Integer (no units) |
`go_heap_released` | The number of bytes of physical memory returned to the operating system. | Gauge | | Integer (no units) |
`go_heap_objects` | The number of allocated heap objects. | Gauge | | Integer (no units) |
`go_stack_in_use` | The number of bytes in stack spans that are currently in use. | Gauge | | Integer (no units) |
`go_stack_sys` | The number of bytes of stack memory obtained from the operating system. | Gauge | | Integer (no units) |
`go_mspan_in_use` | The number of bytes of allocated `mspan` structures. | Gauge | | Integer (no units) |
`go_mspan_sys` | The number of bytes of memory obtained from the operating system for `mspan` structures. | Gauge | | Integer (no units) |
`go_mcache_in_use` | The number of bytes of allocated `mcache` structures. | Gauge | | Integer (no units) |
`go_mcache_sys` | The number of bytes of memory obtained from the operating system for `mcache` structures. | Gauge | | Integer (no units) |
`go_bucket_hash_sys` | The number of bytes of memory in profiling bucket hash tables. | Gauge | | Integer (no units) |
`go_gc_sys` | The number of bytes of memory in garbage collection metadata. | Gauge | | Integer (no units) |
`go_other_sys` | The number of bytes of memory in miscellaneous, off-heap runtime allocations. | Gauge | | Integer (no units) |
`go_next_gc` | The target heap size of the next garbage collection cycle. | Gauge | | Integer (no units) |
`go_last_gc` | The time that the last garbage collection was completed in Epoch or Unix time. | Gauge | | Nanoseconds |
`go_total_gc_pause_ns` | The cumulative time in garbage collection stop-the-world pauses since the program started. | Gauge | | Nanoseconds |
`go_num_gc` | The number of completed garbage collection cycles. | Gauge | | Integer (no units) |
`go_num_forced_gc` | The number of garbage collection cycles that were forced due to an application calling the garbage collection function. | Gauge | | Integer (no units) |
`go_gc_cpu_fraction` | The fraction of the available CPU time of the program that has been used by the garbage collector since the program started. | Gauge | | Integer (no units) |
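These raw gauges are most useful when combined: the in-use-to-obtained heap ratio indicates how much OS memory the heap is actively using, and dividing the cumulative pause time by the cycle count gives an average stop-the-world pause. A minimal sketch with hypothetical gauge values:

```python
# Sketch: derive summary figures from hypothetical Go MemStats gauge values.
metrics = {
    "go_heap_in_use": 48 * 1024 * 1024,   # bytes in spans currently in use
    "go_heap_sys": 64 * 1024 * 1024,      # heap bytes obtained from the OS
    "go_total_gc_pause_ns": 12_000_000,   # cumulative stop-the-world pauses
    "go_num_gc": 120,                     # completed GC cycles
}

heap_utilization = metrics["go_heap_in_use"] / metrics["go_heap_sys"]
avg_pause_us = metrics["go_total_gc_pause_ns"] / metrics["go_num_gc"] / 1000

print(f"heap utilization: {heap_utilization:.0%}")  # 75%
print(f"average GC pause: {avg_pause_us:.0f} us")   # 100 us
```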