If your OpenShift Pipelines installation runs a large number of tasks at the same time, its performance might degrade. You might experience slowdowns and failed pipeline runs.

For reference, in Red Hat tests on a three-node OpenShift Container Platform cluster running on Amazon Web Services (AWS) m6a.2xlarge nodes, up to 60 simple test pipelines ran concurrently without significant failures or delays. When more pipelines ran concurrently, the number of failed pipeline runs, the average duration of a pipeline run, the pod creation latency, the work queue depth, and the number of pending pods all increased. This testing was performed on Red Hat OpenShift Pipelines version 1.13; the results did not differ in a statistically significant way from version 1.12.

These results depend on the test configuration; performance with your configuration might differ.

Improving OpenShift Pipelines performance

If you experience slowness or recurrent failures of pipeline runs, you can take any of the following steps to improve the performance of OpenShift Pipelines.

  • Monitor the resource usage of the nodes in the OpenShift Container Platform cluster on which OpenShift Pipelines runs, for example by using the commands shown after this list. If the resource usage is high, increase the number of nodes.

  • Enable high-availability mode. This mode affects the controller that creates and starts pods for task runs and pipeline runs. In Red Hat testing, high-availability mode significantly reduced pipeline execution times and the delay between the creation of a TaskRun resource and the start of the pod that executes the task run. To enable high-availability mode, make the following changes in the TektonConfig custom resource (CR), as shown in the example CR after this list:

    • Set the pipeline.performance.disable-ha spec to false.

    • Set the pipeline.performance.buckets spec to a number between 5 and 10.

    • Set the pipeline.performance.replicas spec to a number higher than 2 and lower than or equal to the pipeline.performance.buckets setting.

      You can try different values for buckets and replicas to observe the effect on performance. In general, higher values are beneficial, but monitor the nodes to ensure that you do not exhaust their resources, including CPU and memory.
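
To check current node and pod resource usage, assuming the cluster metrics components are available, you can use the oc adm top commands. The openshift-pipelines namespace shown here is the default installation namespace:

    $ oc adm top nodes                         # current CPU and memory usage for each node
    $ oc adm top pods -n openshift-pipelines   # usage of the OpenShift Pipelines controller pods

If these commands show sustained high CPU or memory usage, consider adding nodes to the cluster.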
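
The following is a minimal sketch of a TektonConfig CR with high-availability mode enabled. It assumes the default CR name, config; the buckets and replicas values are illustrative starting points, not recommendations:

    apiVersion: operator.tekton.dev/v1alpha1
    kind: TektonConfig
    metadata:
      name: config
    spec:
      pipeline:
        performance:
          disable-ha: false   # enable high-availability mode
          buckets: 7          # illustrative value in the 5 to 10 range
          replicas: 5         # illustrative value: higher than 2 and not higher than buckets

You can edit the CR in place, for example by entering the oc edit tektonconfig config command.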