This document describes how to manage compute (also known as worker) nodes with Red Hat OpenShift Service on AWS (ROSA).

The majority of changes for compute nodes are configured on machine pools. A machine pool is a group of compute nodes in a cluster that have the same configuration, providing ease of management.

You can edit machine pool configuration options, such as scaling the node count, adding node labels, and adding taints.

Creating a machine pool

A machine pool is created when you install a Red Hat OpenShift Service on AWS (ROSA) cluster. After installation, you can create additional machine pools for your cluster by using OpenShift Cluster Manager or the ROSA CLI (rosa).

For users of ROSA CLI rosa version 1.2.25 and earlier versions, the machine pool created along with the cluster is identified as Default. For users of ROSA CLI rosa version 1.2.26 and later, the machine pool created along with the cluster is identified as worker.
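
For example, you can confirm the name of this machine pool by listing the machine pools in your cluster with the ROSA CLI; the exact output columns vary by rosa version:

    $ rosa list machinepools --cluster=<cluster_name>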

Creating a machine pool using OpenShift Cluster Manager

You can create additional machine pools for your Red Hat OpenShift Service on AWS (ROSA) cluster by using OpenShift Cluster Manager.

Prerequisites
  • You created a ROSA cluster.

Procedure
  1. Navigate to OpenShift Cluster Manager and select your cluster.

  2. Under the Machine pools tab, click Add machine pool.

  3. Add a Machine pool name.

  4. Select a Compute node instance type from the drop-down menu. The instance type defines the vCPU and memory allocation for each compute node in the machine pool.

    You cannot change the instance type for a machine pool after the pool is created.

  5. Optional: Configure autoscaling for the machine pool:

    1. Select Enable autoscaling to automatically scale the number of machines in your machine pool to meet the deployment needs.

    2. Set the minimum and maximum node count limits for autoscaling. The cluster autoscaler does not reduce or increase the machine pool node count beyond the limits that you specify.

      • If you deployed your cluster using a single availability zone, set the Minimum and maximum node count. This defines the minimum and maximum compute node limits in the availability zone.

      • If you deployed your cluster using multiple availability zones, set the Minimum nodes per zone and Maximum nodes per zone. This defines the minimum and maximum compute node limits per zone.

        Alternatively, you can set your autoscaling preferences for the machine pool after the machine pool is created.

  6. If you did not enable autoscaling, select a compute node count:

    • If you deployed your cluster using a single availability zone, select a Compute node count from the drop-down menu. This defines the number of compute nodes to provision to the machine pool for the zone.

    • If you deployed your cluster using multiple availability zones, select a Compute node count (per zone) from the drop-down menu. This defines the number of compute nodes to provision to the machine pool per zone.

  7. Optional: Configure Root disk size.

  8. Optional: Add node labels and taints for your machine pool:

    1. Expand the Edit node labels and taints menu.

    2. Under Node labels, add Key and Value entries for your node labels.

    3. Under Taints, add Key and Value entries for your taints.

      Creating a machine pool with taints is only possible if the cluster already has at least one machine pool without a taint.

    4. For each taint, select an Effect from the drop-down menu. Available options include NoSchedule, PreferNoSchedule, and NoExecute.

      Alternatively, you can add the node labels and taints after you create the machine pool.

  9. Optional: Select additional custom security groups to use for nodes in this machine pool. You must have already created the security groups and associated them with the VPC that you selected for this cluster. You cannot add or edit security groups after you create the machine pool. For more information, see the requirements for security groups in the "Additional resources" section.

    You can use up to ten additional security groups for machine pools on ROSA with HCP clusters.

  10. Optional: Use Amazon EC2 Spot Instances if you want to configure your machine pool to deploy machines as non-guaranteed AWS Spot Instances:

    1. Select Use Amazon EC2 Spot Instances.

    2. Leave Use On-Demand instance price selected to use the on-demand instance price. Alternatively, select Set maximum price to define a maximum hourly price for a Spot Instance.

      For more information about Amazon EC2 Spot Instances, see the AWS documentation.

      Your Amazon EC2 Spot Instances might be interrupted at any time. Use Amazon EC2 Spot Instances only for workloads that can tolerate interruptions.

      If you select Use Amazon EC2 Spot Instances for a machine pool, you cannot disable the option after the machine pool is created.

  11. Click Add machine pool to create the machine pool.

Verification
  • Verify that the machine pool is visible on the Machine pools page and the configuration is as expected.
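
If you also use the ROSA CLI (rosa), you can cross-check the new machine pool from your workstation; a minimal sketch, assuming you are logged in to your Red Hat account:

    $ rosa list machinepools --cluster=<cluster_name>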

Additional resources

Creating a machine pool using the ROSA CLI

You can create additional machine pools for your Red Hat OpenShift Service on AWS (ROSA) cluster by using the ROSA CLI (rosa).

Prerequisites
  • You installed and configured the latest Red Hat OpenShift Service on AWS (ROSA) CLI, rosa, on your workstation.

  • You logged in to your Red Hat account using the ROSA CLI (rosa).

  • You created a ROSA cluster.

Procedure
  • To add a machine pool that does not use autoscaling, create the machine pool and define the instance type, compute (also known as worker) node count, and node labels:

    $ rosa create machinepool --cluster=<cluster-name> \
                              --name=<machine_pool_id> \ (1)
                              --replicas=<replica_count> \ (2)
                              --instance-type=<instance_type> \ (3)
                              --labels=<key>=<value>,<key>=<value> \ (4)
                              --taints=<key>=<value>:<effect>,<key>=<value>:<effect> \ (5)
                              --use-spot-instances \ (6)
                              --spot-max-price=0.5 \ (7)
                              --disk-size=<disk_size> \ (8)
                              --availability-zone=<availability_zone_name> \ (9)
                              --additional-security-group-ids <sec_group_id> \ (10)
                              --subnet=<subnet_id> (11)
    
    1 Specifies the name of the machine pool. Replace <machine_pool_id> with the name of your machine pool.
    2 Specifies the number of compute nodes to provision. If you deployed ROSA using a single availability zone, this defines the number of compute nodes to provision to the machine pool for the zone. If you deployed your cluster using multiple availability zones, this defines the number of compute nodes to provision in total across all zones and the count must be a multiple of 3. The --replicas argument is required when autoscaling is not configured.
    3 Optional: Sets the instance type for the compute nodes in your machine pool. The instance type defines the vCPU and memory allocation for each compute node in the pool. Replace <instance_type> with an instance type. The default is m5.xlarge. You cannot change the instance type for a machine pool after the pool is created.
    4 Optional: Defines the labels for the machine pool. Replace <key>=<value>,<key>=<value> with a comma-delimited list of key-value pairs, for example --labels=key1=value1,key2=value2.
    5 Optional: Defines the taints for the machine pool. Replace <key>=<value>:<effect>,<key>=<value>:<effect> with a key, value, and effect for each taint, for example --taints=key1=value1:NoSchedule,key2=value2:NoExecute. Available effects include NoSchedule, PreferNoSchedule, and NoExecute.
    6 Optional: Configures your machine pool to deploy machines as non-guaranteed AWS Spot Instances. For information, see Amazon EC2 Spot Instances in the AWS documentation. If you select Use Amazon EC2 Spot Instances for a machine pool, you cannot disable the option after the machine pool is created.
    7 Optional: If you choose to use Spot Instances, you can specify this argument to define a maximum hourly price for a Spot Instance. If this argument is not specified, the on-demand price is used.

    Your Amazon EC2 Spot Instances might be interrupted at any time. Use Amazon EC2 Spot Instances only for workloads that can tolerate interruptions.

    8 Optional: Specifies the worker node disk size. The value can be in GB, GiB, TB, or TiB. Replace <disk_size> with a numeric value and unit, for example --disk-size=200GiB.
    9 Optional: For Multi-AZ clusters, you can create a machine pool in a Single-AZ of your choice. Replace <availability_zone_name> with a Single-AZ name.

    Multi-AZ clusters retain a Multi-AZ control plane and can have worker machine pools across a Single-AZ or Multi-AZ. Machine pools distribute machines (nodes) evenly across availability zones.

    If you choose a worker machine pool with a Single-AZ, there is no fault tolerance for that machine pool, regardless of machine replica count. For fault-tolerant worker machine pools, choosing a Multi-AZ machine pool distributes machines in multiples of 3 across availability zones.

    • A Multi-AZ machine pool with three availability zones can have a machine count in multiples of 3 only, such as 3, 6, 9, and so on.

    • A Single-AZ machine pool with one availability zone can have a machine count in multiples of 1, such as 1, 2, 3, 4, and so on.

    10 Optional: For machine pools in clusters that do not have Red Hat managed VPCs, you can select additional custom security groups to use in your machine pools. You must have already created the security groups and associated them with the VPC that you selected for this cluster. You cannot add or edit security groups after you create the machine pool. For more information, see the requirements for security groups in the "Additional resources" section.

    You can use up to ten additional security groups for machine pools on ROSA with HCP clusters.

    11 Optional: For BYO VPC clusters, you can select a subnet to create a Single-AZ machine pool. Replace <subnet_id> with the ID of the subnet. If the subnet is outside of the subnets used during cluster creation, it must have a tag with the key kubernetes.io/cluster/<infra-id> and the value shared. You can obtain the Infra ID by using the following command:
    $ rosa describe cluster -c <cluster_name> | grep "Infra ID:"
    Example output
    Infra ID:                   mycluster-xqvj7

    You cannot set both --subnet and --availability-zone at the same time; only one is allowed when creating a Single-AZ machine pool.
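
    For illustration, the following sketch tags a subnet that is outside of the cluster creation subnets and then creates a Single-AZ machine pool in it. The subnet ID and machine pool name are assumptions for the example; the Infra ID is taken from the example output above:

    $ aws ec2 create-tags --resources subnet-0123456789abcdef0 \
        --tags Key=kubernetes.io/cluster/mycluster-xqvj7,Value=shared
    $ rosa create machinepool --cluster=mycluster --name=byovpc-mp --replicas=2 --subnet=subnet-0123456789abcdef0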

    The following example creates a machine pool called mymachinepool that uses the m5.xlarge instance type and has 2 compute node replicas. The example also adds 2 workload-specific labels:

    $ rosa create machinepool --cluster=mycluster --name=mymachinepool --replicas=2 --instance-type=m5.xlarge --labels=app=db,tier=backend
    Example output
    I: Machine pool 'mymachinepool' created successfully on cluster 'mycluster'
    I: To view all machine pools, run 'rosa list machinepools -c mycluster'
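
    As a further sketch, the following command creates a machine pool that deploys Spot Instances with an assumed maximum hourly price of $0.05. Spot capacity can be interrupted at any time, so use it only for interruption-tolerant workloads:

    $ rosa create machinepool --cluster=mycluster --name=spot-mp --replicas=2 --use-spot-instances --spot-max-price=0.05
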
  • To add a machine pool that uses autoscaling, create the machine pool and define the autoscaling configuration, instance type, and node labels:

    $ rosa create machinepool --cluster=<cluster-name> \
                              --name=<machine_pool_id> \ (1)
                              --enable-autoscaling \ (2)
                              --min-replicas=<minimum_replica_count> \ (3)
                              --max-replicas=<maximum_replica_count> \ (3)
                              --instance-type=<instance_type> \ (4)
                              --labels=<key>=<value>,<key>=<value> \ (5)
                              --taints=<key>=<value>:<effect>,<key>=<value>:<effect> \ (6)
                              --use-spot-instances \ (7)
                              --spot-max-price=0.5 \ (8)
                              --availability-zone=<availability_zone_name> (9)
    
    1 Specifies the name of the machine pool. Replace <machine_pool_id> with the name of your machine pool.
    2 Enables autoscaling in the machine pool to meet the deployment needs.
    3 Defines the minimum and maximum compute node limits. The cluster autoscaler does not reduce or increase the machine pool node count beyond the limits that you specify. If you deployed ROSA using a single availability zone, the --min-replicas and --max-replicas arguments define the autoscaling limits in the machine pool for the zone. If you deployed your cluster using multiple availability zones, the arguments define the autoscaling limits in total across all zones and the counts must be multiples of 3.
    4 Optional: Sets the instance type for the compute nodes in your machine pool. The instance type defines the vCPU and memory allocation for each compute node in the pool. Replace <instance_type> with an instance type. The default is m5.xlarge. You cannot change the instance type for a machine pool after the pool is created.
    5 Optional: Defines the labels for the machine pool. Replace <key>=<value>,<key>=<value> with a comma-delimited list of key-value pairs, for example --labels=key1=value1,key2=value2.
    6 Optional: Defines the taints for the machine pool. Replace <key>=<value>:<effect>,<key>=<value>:<effect> with a key, value, and effect for each taint, for example --taints=key1=value1:NoSchedule,key2=value2:NoExecute. Available effects include NoSchedule, PreferNoSchedule, and NoExecute.
    7 Optional: Configures your machine pool to deploy machines as non-guaranteed AWS Spot Instances. For information, see Amazon EC2 Spot Instances in the AWS documentation. If you select Use Amazon EC2 Spot Instances for a machine pool, you cannot disable the option after the machine pool is created.

    Your Amazon EC2 Spot Instances might be interrupted at any time. Use Amazon EC2 Spot Instances only for workloads that can tolerate interruptions.

    8 Optional: If you choose to use Spot Instances, you can specify this argument to define a maximum hourly price for a Spot Instance. If this argument is not specified, the on-demand price is used.
    9 Optional: For Multi-AZ clusters, you can create a machine pool in a Single-AZ of your choice. Replace <availability_zone_name> with a Single-AZ name.

    The following example creates a machine pool called mymachinepool that uses the m5.xlarge instance type and has autoscaling enabled. The minimum compute node limit is 3 and the maximum is 6 overall. The example also adds 2 workload-specific labels:

    $ rosa create machinepool --cluster=mycluster --name=mymachinepool --enable-autoscaling --min-replicas=3 --max-replicas=6 --instance-type=m5.xlarge --labels=app=db,tier=backend
    Example output
    I: Machine pool 'mymachinepool' created successfully on cluster 'mycluster'
    I: To view all machine pools, run 'rosa list machinepools -c mycluster'
Verification

You can list all machine pools on your cluster or describe individual machine pools.

  1. List the available machine pools on your cluster:

    $ rosa list machinepools --cluster=<cluster_name>
    Example output
    ID             AUTOSCALING  REPLICAS  INSTANCE TYPE  LABELS                  TAINTS    AVAILABILITY ZONES                    SPOT INSTANCES
    Default        No           3         m5.xlarge                                        us-east-1a, us-east-1b, us-east-1c    N/A
    mymachinepool  Yes          3-6       m5.xlarge      app=db, tier=backend              us-east-1a, us-east-1b, us-east-1c    No
  2. Describe the information of a specific machine pool in your cluster:

    $ rosa describe machinepool --cluster=<cluster_name> mymachinepool
    Example output
    ID:                         mymachinepool
    Cluster ID:                 27iimopsg1mge0m81l0sqivkne2qu6dr
    Autoscaling:                Yes
    Replicas:                   3-6
    Instance type:              m5.xlarge
    Labels:                     app=db, tier=backend
    Taints:
    Availability zones:         us-east-1a, us-east-1b, us-east-1c
    Subnets:
    Spot instances:             No
    Disk size:                  300 GiB
    Security Group IDs:
  3. Verify that the machine pool is included in the output and the configuration is as expected.

Additional resources

Configuring machine pool disk volume

Machine pool disk volume size can be configured for additional flexibility. The default disk size is 300 GiB. For cluster version 4.13 or earlier, the disk size can be configured to a minimum of 128 GiB to a maximum of 1 TiB. For cluster version 4.14 and later, the disk size can be configured to a minimum of 128 GiB to a maximum of 16 TiB.

You can configure the machine pool disk size for your cluster by using OpenShift Cluster Manager or the ROSA CLI (rosa).

Existing cluster and machine pool node volumes cannot be resized.

Configuring machine pool disk volume using OpenShift Cluster Manager

Prerequisite for cluster creation
  • You have the option to select the node disk sizing for the default machine pool during cluster installation.

Procedure for cluster creation
  1. From the ROSA cluster wizard, navigate to Cluster settings.

  2. Navigate to the Machine pool step.

  3. Select the desired Root disk size.

  4. Select Next to continue creating your cluster.

Prerequisite for machine pool creation
  • You have the option to select the node disk sizing for the new machine pool after the cluster has been installed.

Procedure for machine pool creation
  1. Navigate to OpenShift Cluster Manager and select your cluster.

  2. Navigate to the Machine pools tab.

  3. Click Add machine pool.

  4. Select the desired Root disk size.

  5. Select Add machine pool to create the machine pool.

Configuring machine pool disk volume using the ROSA CLI

Prerequisite for cluster creation
  • You have the option to select the root disk sizing for the default machine pool during cluster installation.

Procedure for cluster creation
  • To set the desired root disk size, run the following command when creating your OpenShift cluster:

    $ rosa create cluster --worker-disk-size=<disk_size>

    The value can be in GB, GiB, TB, or TiB. Replace <disk_size> with a numeric value and unit, for example --worker-disk-size=200GiB. You cannot separate the digit and the unit. No spaces are allowed.
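
    For example, the following sketch creates a cluster whose default machine pool uses 200 GiB root disks; the cluster name is an assumption for the example:

    $ rosa create cluster --cluster-name=mycluster --worker-disk-size=200GiB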

Prerequisite for machine pool creation
  • You have the option to select the root disk sizing for the new machine pool after the cluster has been installed.

Procedure for machine pool creation
  1. Create a machine pool with the desired disk size by running the following command:

    $ rosa create machinepool --cluster=<cluster_id> \ (1)
                              --disk-size=<disk_size> (2)
    
    1 Specifies the ID or name of your existing OpenShift cluster.
    2 Specifies the worker node disk size. The value can be in GB, GiB, TB, or TiB. Replace <disk_size> with a numeric value and unit, for example --disk-size=200GiB. You cannot separate the digit and the unit. No spaces are allowed.
  2. Confirm the new machine pool disk volume size by logging in to the AWS console and finding the EC2 virtual machine root volume size.
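
    Alternatively, you can confirm the disk size from the ROSA CLI; a minimal sketch, assuming the machine pool created in the previous step:

    $ rosa describe machinepool --cluster=<cluster_id> <machine_pool_id> | grep "Disk size"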

Additional resources

Deleting a machine pool

You can delete a machine pool if your workload requirements have changed and your current machine pools no longer meet your needs.

You can delete machine pools using the OpenShift Cluster Manager or the ROSA CLI (rosa).

Deleting a machine pool using OpenShift Cluster Manager

You can delete a machine pool for your Red Hat OpenShift Service on AWS (ROSA) cluster by using OpenShift Cluster Manager.

Prerequisites
  • You created a ROSA cluster.

  • The cluster is in the ready state.

  • You have an existing machine pool without any taints and with at least two instances for a single-AZ cluster or three instances for a multi-AZ cluster.

Procedure
  1. From OpenShift Cluster Manager, navigate to the Clusters page and select the cluster that contains the machine pool that you want to delete.

  2. On the selected cluster, select the Machine pools tab.

  3. Under the Machine pools tab, click the options menu (kebab) for the machine pool that you want to delete.

  4. Click Delete.

The selected machine pool is deleted.

Deleting a machine pool using the ROSA CLI

You can delete a machine pool for your Red Hat OpenShift Service on AWS (ROSA) cluster by using the ROSA CLI.

For users of ROSA CLI rosa version 1.2.25 and earlier versions, the machine pool (ID='Default') that is created along with the cluster cannot be deleted. For users of ROSA CLI rosa version 1.2.26 and later, the machine pool (ID='worker') that is created along with the cluster can be deleted as long as there is one machine pool within the cluster that contains no taints, and at least two replicas for a Single-AZ cluster or three replicas for a Multi-AZ cluster.

Prerequisites
  • You created a ROSA cluster.

  • The cluster is in the ready state.

  • You have an existing machine pool without any taints and with at least two instances for a Single-AZ cluster or three instances for a Multi-AZ cluster.

Procedure
  1. From the ROSA CLI, run the following command:

    $ rosa delete machinepool -c=<cluster_name> <machine_pool_ID>
    Example output
    ? Are you sure you want to delete machine pool <machine_pool_ID> on cluster <cluster_name>? (y/N)
  2. Enter 'y' to delete the machine pool.

    The selected machine pool is deleted.
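
    To skip the confirmation prompt, for example in automation, recent rosa versions accept a --yes flag; verify support in your version with rosa delete machinepool --help:

    $ rosa delete machinepool -c=<cluster_name> <machine_pool_ID> --yes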

Scaling compute nodes manually

If you have not enabled autoscaling for your machine pool, you can manually scale the number of compute (also known as worker) nodes in the pool to meet your deployment needs.

You must scale each machine pool separately.

Prerequisites
  • You installed and configured the latest Red Hat OpenShift Service on AWS (ROSA) CLI, rosa, on your workstation.

  • You logged in to your Red Hat account using the ROSA CLI (rosa).

  • You created a Red Hat OpenShift Service on AWS (ROSA) cluster.

  • You have an existing machine pool.

Procedure
  1. List the machine pools in the cluster:

    $ rosa list machinepools --cluster=<cluster_name>
    Example output
    ID        AUTOSCALING   REPLICAS    INSTANCE TYPE  LABELS    TAINTS   AVAILABILITY ZONES    DISK SIZE   SG IDs
    default   No            2           m5.xlarge                         us-east-1a            300GiB      sg-0e375ff0ec4a6cfa2
    mp1       No            2           m5.xlarge                         us-east-1a            300GiB      sg-0e375ff0ec4a6cfa2
  2. Increase or decrease the number of compute node replicas in a machine pool:

    $ rosa edit machinepool --cluster=<cluster_name> \
                            --replicas=<replica_count> \ (1)
                            <machine_pool_id> (2)
    
    1 If you deployed Red Hat OpenShift Service on AWS (ROSA) using a single availability zone, the replica count defines the number of compute nodes to provision to the machine pool for the zone. If you deployed your cluster using multiple availability zones, the count defines the total number of compute nodes in the machine pool across all zones and must be a multiple of 3.
    2 Replace <machine_pool_id> with the ID of your machine pool, as listed in the output of the preceding command.
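
    For example, the following command scales the mp1 machine pool from the listing above to 3 replicas, matching the verification output below:

    $ rosa edit machinepool --cluster=mycluster --replicas=3 mp1
    Example output
    I: Updated machine pool 'mp1' on cluster 'mycluster'
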
Verification
  1. List the available machine pools in your cluster:

    $ rosa list machinepools --cluster=<cluster_name>
    Example output
    ID        AUTOSCALING   REPLICAS    INSTANCE TYPE  LABELS    TAINTS   AVAILABILITY ZONES    DISK SIZE   SG IDs
    default   No            2           m5.xlarge                         us-east-1a            300GiB      sg-0e375ff0ec4a6cfa2
    mp1       No            3           m5.xlarge                         us-east-1a            300GiB      sg-0e375ff0ec4a6cfa2
  2. In the output of the preceding command, verify that the compute node replica count is as expected for your machine pool. In the example output, the compute node replica count for the mp1 machine pool is scaled to 3.

Node labels

A label is a key-value pair applied to a Node object. You can use labels to organize sets of objects and control the scheduling of pods.

You can add labels during cluster creation or after. Labels can be modified or updated at any time.
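
For example, you can view the labels that are applied to your nodes by using the OpenShift CLI (oc):

    $ oc get nodes --show-labels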

Additional resources

Adding node labels to a machine pool

Add or edit labels for compute (also known as worker) nodes at any time to manage the nodes in a manner that is relevant to you. For example, you can assign types of workloads to specific nodes.

Labels are assigned as key-value pairs. Each key must be unique to the object it is assigned to.

Prerequisites
  • You installed and configured the latest Red Hat OpenShift Service on AWS (ROSA) CLI, rosa, on your workstation.

  • You logged in to your Red Hat account using the ROSA CLI (rosa).

  • You created a Red Hat OpenShift Service on AWS (ROSA) cluster.

  • You have an existing machine pool.

Procedure
  1. List the machine pools in the cluster:

    $ rosa list machinepools --cluster=<cluster_name>
    Example output
    ID           AUTOSCALING  REPLICAS  INSTANCE TYPE  LABELS    TAINTS    AVAILABILITY ZONES    SPOT INSTANCES
    Default      No           2         m5.xlarge                          us-east-1a            N/A
    db-nodes-mp  No           2         m5.xlarge                          us-east-1a            No
  2. Add or update the node labels for a machine pool:

    • To add or update node labels for a machine pool that does not use autoscaling, run the following command:

      $ rosa edit machinepool --cluster=<cluster_name> \
                              --replicas=<replica_count> \ (1)
                              --labels=<key>=<value>,<key>=<value> \ (2)
                              <machine_pool_id>
      1 For machine pools that do not use autoscaling, you must provide a replica count when adding node labels. If you do not specify the --replicas argument, you are prompted for a replica count before the command completes. If you deployed Red Hat OpenShift Service on AWS (ROSA) using a single availability zone, the replica count defines the number of compute nodes to provision to the machine pool for the zone. If you deployed your cluster using multiple availability zones, the count defines the total number of compute nodes in the machine pool across all zones and must be a multiple of 3.
      2 Replace <key>=<value>,<key>=<value> with a comma-delimited list of key-value pairs, for example --labels=key1=value1,key2=value2. This list replaces the full set of node labels for the machine pool; any existing labels that are not included are removed.

      The following example adds labels to the db-nodes-mp machine pool:

      $ rosa edit machinepool --cluster=mycluster --replicas=2 --labels=app=db,tier=backend db-nodes-mp
      Example output
      I: Updated machine pool 'db-nodes-mp' on cluster 'mycluster'
    • To add or update node labels for a machine pool that uses autoscaling, run the following command:

      $ rosa edit machinepool --cluster=<cluster_name> \
                              --min-replicas=<minimum_replica_count> \ (1)
                              --max-replicas=<maximum_replica_count> \ (1)
                              --labels=<key>=<value>,<key>=<value> \ (2)
                              <machine_pool_id>
      1 For machine pools that use autoscaling, you must provide minimum and maximum compute node replica limits. If you do not specify the arguments, you are prompted for the values before the command completes. The cluster autoscaler does not reduce or increase the machine pool node count beyond the limits that you specify. If you deployed ROSA using a single availability zone, the --min-replicas and --max-replicas arguments define the autoscaling limits in the machine pool for the zone. If you deployed your cluster using multiple availability zones, the arguments define the autoscaling limits in total across all zones and the counts must be multiples of 3.
      2 Replace <key>=<value>,<key>=<value> with a comma-delimited list of key-value pairs, for example --labels=key1=value1,key2=value2. This list replaces the full set of node labels for the machine pool; any existing labels that are not included are removed.

      The following example adds labels to the db-nodes-mp machine pool:

      $ rosa edit machinepool --cluster=mycluster --min-replicas=2 --max-replicas=3 --labels=app=db,tier=backend db-nodes-mp
      Example output
      I: Updated machine pool 'db-nodes-mp' on cluster 'mycluster'
Verification
  1. Describe the details of the machine pool with the new labels:

    $ rosa describe machinepool --cluster=<cluster_name> <machine-pool-name>
    Example output
    ID:                         db-nodes-mp
    Cluster ID:                 <ID_of_cluster>
    Autoscaling:                No
    Replicas:                   2
    Instance type:              m5.xlarge
    Labels:                     app=db, tier=backend
    Taints:
    Availability zones:         us-east-1a
    Subnets:
    Spot instances:             No
    Disk size:                  300 GiB
    Security Group IDs:
  2. Verify that the labels are included for your machine pool in the output.
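
You can also confirm that the labels were applied to the corresponding nodes; a minimal sketch using the OpenShift CLI (oc) and the labels from the example above:

    $ oc get nodes -l app=db,tier=backend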

Adding taints to a machine pool

You can add taints for compute (also known as worker) nodes in a machine pool to control which pods are scheduled to them. When you apply a taint to a machine pool, the scheduler cannot place a pod on the nodes in the pool unless the pod specification includes a toleration for the taint. Taints can be added to a machine pool using the OpenShift Cluster Manager or the Red Hat OpenShift Service on AWS (ROSA) CLI, rosa.

A cluster must have at least one machine pool that does not contain any taints.
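
For reference, the following is a minimal sketch of a pod that tolerates a hypothetical key1=value1:NoSchedule taint, applied with the OpenShift CLI (oc); the pod name and image are assumptions for the example:

    $ oc apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: toleration-example
    spec:
      containers:
      - name: app
        image: registry.access.redhat.com/ubi9/ubi
        command: ["sleep", "infinity"]
      tolerations:
      - key: "key1"
        operator: "Equal"
        value: "value1"
        effect: "NoSchedule"
    EOF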

Adding taints to a machine pool using OpenShift Cluster Manager

You can add taints to a machine pool for your Red Hat OpenShift Service on AWS (ROSA) cluster by using OpenShift Cluster Manager.

Prerequisites
  • You created a Red Hat OpenShift Service on AWS (ROSA) cluster.

  • You have an existing machine pool that does not contain any taints and contains at least two instances.

Procedure
  1. Navigate to OpenShift Cluster Manager and select your cluster.

  2. Under the Machine pools tab, click the options menu (kebab) for the machine pool that you want to add a taint to.

  3. Select Edit taints.

  4. Add Key and Value entries for your taint.

  5. Select an Effect for your taint from the drop-down menu. Available options include NoSchedule, PreferNoSchedule, and NoExecute.

  6. Optional: Select Add taint if you want to add more taints to the machine pool.

  7. Click Save to apply the taints to the machine pool.

Verification
  1. Under the Machine pools tab, select > next to your machine pool to expand the view.

  2. Verify that your taints are listed under Taints in the expanded view.

Adding taints to a machine pool using the ROSA CLI

You can add taints to a machine pool for your Red Hat OpenShift Service on AWS (ROSA) cluster by using the ROSA CLI.

For users of ROSA CLI rosa version 1.2.25 and earlier versions, the number of taints cannot be changed within the machine pool (ID=Default) created along with the cluster. For users of ROSA CLI rosa version 1.2.26 and later, the number of taints can be changed within the machine pool (ID=worker) created along with the cluster. There must be at least one machine pool without any taints and with at least two replicas for a Single-AZ cluster or three replicas for a Multi-AZ cluster.

Prerequisites
  • You installed and configured the latest AWS (aws), ROSA (rosa), and OpenShift (oc) CLIs on your workstation.

  • You logged in to your Red Hat account by using the rosa CLI.

  • You created a Red Hat OpenShift Service on AWS (ROSA) cluster.

  • You have an existing machine pool that does not contain any taints and contains at least two instances.

Procedure
  1. List the machine pools in the cluster by running the following command:

    $ rosa list machinepools --cluster=<cluster_name>
    Example output
    ID           AUTOSCALING  REPLICAS  INSTANCE TYPE  LABELS    TAINTS    AVAILABILITY ZONES    SPOT INSTANCES     DISK SIZE   SG IDs
    Default      No           2         m5.xlarge                          us-east-1a            N/A                300 GiB     sg-0e375ff0ec4a6cfa2
    db-nodes-mp  No           2         m5.xlarge                          us-east-1a            No                 300 GiB     sg-0e375ff0ec4a6cfa2
  2. Add or update the taints for a machine pool:

    • To add or update taints for a machine pool that does not use autoscaling, run the following command:

      $ rosa edit machinepool --cluster=<cluster_name> \
                              --replicas=<replica_count> \ (1)
                              --taints=<key>=<value>:<effect>,<key>=<value>:<effect> \ (2)
                              <machine_pool_id>
      1 For machine pools that do not use autoscaling, you must provide a replica count when adding taints. If you do not specify the --replicas argument, you are prompted for a replica count before the command completes. If you deployed Red Hat OpenShift Service on AWS (ROSA) using a single availability zone, the replica count defines the number of compute nodes to provision to the machine pool for the zone. If you deployed your cluster using multiple availability zones, the count defines the total number of compute nodes in the machine pool across all zones and must be a multiple of 3.
      2 Replace <key>=<value>:<effect>,<key>=<value>:<effect> with a key, value, and effect for each taint, for example --taints=key1=value1:NoSchedule,key2=value2:NoExecute. Available effects include NoSchedule, PreferNoSchedule, and NoExecute. This list replaces the full set of taints for the machine pool; any existing taints that are not included are removed.

      The following example adds taints to the db-nodes-mp machine pool:

      $ rosa edit machinepool --cluster=mycluster --replicas 2 --taints=key1=value1:NoSchedule,key2=value2:NoExecute db-nodes-mp
      Example output
      I: Updated machine pool 'db-nodes-mp' on cluster 'mycluster'
    • To add or update taints for a machine pool that uses autoscaling, run the following command:

      $ rosa edit machinepool --cluster=<cluster_name> \
                              --min-replicas=<minimum_replica_count> \ (1)
                              --max-replicas=<maximum_replica_count> \ (1)
                              --taints=<key>=<value>:<effect>,<key>=<value>:<effect> \ (2)
                              <machine_pool_id>
      1 For machine pools that use autoscaling, you must provide minimum and maximum compute node replica limits. If you do not specify the arguments, you are prompted for the values before the command completes. The cluster autoscaler does not reduce or increase the machine pool node count beyond the limits that you specify. If you deployed ROSA using a single availability zone, the --min-replicas and --max-replicas arguments define the autoscaling limits in the machine pool for the zone. If you deployed your cluster using multiple availability zones, the arguments define the autoscaling limits in total across all zones and the counts must be multiples of 3.
      2 Replace <key>=<value>:<effect>,<key>=<value>:<effect> with a key, value, and effect for each taint, for example --taints=key1=value1:NoSchedule,key2=value2:NoExecute. Available effects include NoSchedule, PreferNoSchedule, and NoExecute. This list replaces the full set of taints for the machine pool; any existing taints that are not included are removed.

      The following example adds taints to the db-nodes-mp machine pool:

      $ rosa edit machinepool --cluster=mycluster --min-replicas=2 --max-replicas=3 --taints=key1=value1:NoSchedule,key2=value2:NoExecute db-nodes-mp
      Example output
      I: Updated machine pool 'db-nodes-mp' on cluster 'mycluster'
Verification
  1. Describe the details of the machine pool with the new taints:

    $ rosa describe machinepool --cluster=<cluster_name> <machine-pool-name>
    Example output
    ID:                         db-nodes-mp
    Cluster ID:                 <ID_of_cluster>
    Autoscaling:                No
    Replicas:                   2
    Instance type:              m5.xlarge
    Labels:
    Taints:                     key1=value1:NoSchedule, key2=value2:NoExecute
    Availability zones:         us-east-1a
    Subnets:
    Spot instances:             No
    Disk size:                  300 GiB
    Security Group IDs:
  2. Verify that the taints are included for your machine pool in the output.
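
You can also confirm the taints on the corresponding nodes; a minimal sketch using the OpenShift CLI (oc):

    $ oc get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints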

Adding node tuning to a machine pool

You can add tunings for compute (also known as worker) nodes in a machine pool to control their configuration on Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (HCP) clusters.

This feature is only supported on Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (HCP) clusters.

Prerequisites
  • You installed and configured the latest Red Hat OpenShift Service on AWS (ROSA) CLI, rosa, on your workstation.

  • You logged in to your Red Hat account by using the ROSA CLI.

  • You created a Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (HCP) cluster.

  • You have an existing machine pool.

  • You have an existing tuning configuration.

Procedure
  1. List all of the machine pools in the cluster:

    $ rosa list machinepools --cluster=<cluster_name>
    Example output
    ID          AUTOSCALING  REPLICAS  INSTANCE TYPE [...] AVAILABILITY ZONES  SUBNET  VERSION  AUTOREPAIR  TUNING CONFIGS
    workers      No           2         m5.xlarge    [...] us-east-1a          N/A     4.12.14  Yes
    db-nodes-mp  No           2         m5.xlarge    [...] us-east-1a          No      4.12.14  Yes
  2. You can add tuning configurations to an existing or new machine pool.

    1. Add tunings when creating a machine pool:

      $ rosa create machinepool -c <cluster-name> <machinepoolname> --tuning-configs <tuning_config_name>
      Example output
      ? Tuning configs: sample-tuning
      I: Machine pool 'db-nodes-mp' created successfully on hosted cluster 'sample-cluster'
      I: To view all machine pools, run 'rosa list machinepools -c sample-cluster'
    2. Add or update the tunings for a machine pool:

      $ rosa edit machinepool -c <cluster-name> <machinepoolname> --tuning-configs <tuning_config_name>
      Example output
      I: Updated machine pool 'db-nodes-mp' on cluster 'mycluster'
Verification
  1. List the available machine pools in your cluster:

    $ rosa list machinepools --cluster=<cluster_name>
    Example output
    ID          AUTOSCALING  REPLICAS  INSTANCE TYPE [...] AVAILABILITY ZONES  SUBNET  VERSION  AUTOREPAIR  TUNING CONFIGS
    workers      No           2         m5.xlarge    [...] us-east-1a          N/A     4.12.14  Yes
    db-nodes-mp  No           2         m5.xlarge    [...] us-east-1a          No      4.12.14  Yes          sample-tuning
  2. Verify that the tuning config is included for your machine pool in the output.