
After deploying an installer-provisioned OpenShift Container Platform cluster, you can use the following procedures to expand the number of worker nodes. Ensure that each prospective worker node meets the prerequisites.

Expanding the cluster by using Redfish virtual media requires meeting minimum firmware requirements. See Firmware requirements for installing with virtual media in the Prerequisites section for additional details.

Preparing the bare metal node

Expanding the cluster requires a DHCP server. Each node must have a DHCP reservation.

Reserving IP addresses so they become static IP addresses

Some administrators prefer to use static IP addresses so that each node’s IP address remains constant in the absence of a DHCP server. To use static IP addresses in the OpenShift Container Platform cluster, reserve the IP addresses in the DHCP server with an infinite lease. After the installer provisions the node successfully, the dispatcher script checks the node’s network configuration. If the network configuration contains a DHCP infinite lease, the dispatcher script recreates the connection as a static IP connection, using the IP address from the infinite lease. NICs without DHCP infinite leases remain unmodified.
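
Because the reservation lives in the DHCP server itself, the exact syntax depends on which DHCP service you run. As a minimal sketch, assuming the cluster’s DHCP service is provided by dnsmasq, a reservation with an infinite lease for a worker might look like the following; the MAC address, IP address, and hostname are placeholders:

    # Example dnsmasq reservation, for example in a file under /etc/dnsmasq.d/
    # Fields: MAC address, IP address, hostname, lease time ("infinite")
    dhcp-host=<NIC1-mac-address>,<worker-ip>,openshift-worker-<num>,infinite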

Setting IP addresses with an infinite lease is incompatible with network configuration deployed by using the Machine Config Operator.

Preparing the bare metal node requires executing the following procedure from the provisioner node.

Procedure
  1. Get the oc binary, if needed. It should already exist on the provisioner node.
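
    If the VERSION environment variable used in the curl command below is not already set in your shell, export it to match the release of your cluster first. The value shown here is only an example:

    [kni@provisioner ~]$ export VERSION=4.9.0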

    [kni@provisioner ~]$ curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/$VERSION/openshift-client-linux-$VERSION.tar.gz | tar zxvf - oc
    [kni@provisioner ~]$ sudo cp oc /usr/local/bin
  2. Power off the bare metal node via the baseboard management controller and ensure it is off.

  3. Retrieve the user name and password of the bare metal node’s baseboard management controller. Then, create base64 strings from the user name and password. In the following example, the user name is root and the password is calvin.

    [kni@provisioner ~]$ echo -ne "root" | base64
    [kni@provisioner ~]$ echo -ne "calvin" | base64
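
    For the example user name root and password calvin, these commands print the following values, which are used in the Secret in the next step:

    cm9vdA==
    Y2Fsdmlu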
  4. Create a configuration file for the bare metal node.

    [kni@provisioner ~]$ vim bmh.yaml
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: openshift-worker-<num>-bmc-secret
    type: Opaque
    data:
      username: <base64-of-uid>
      password: <base64-of-pwd>
    ---
    apiVersion: metal3.io/v1alpha1
    kind: BareMetalHost
    metadata:
      name: openshift-worker-<num>
    spec:
      online: true
      bootMACAddress: <NIC1-mac-address>
      bmc:
        address: <protocol>://<bmc-ip>
        credentialsName: openshift-worker-<num>-bmc-secret

    Replace <num> with the worker number of the bare metal node in the two name fields and the credentialsName field. Replace <base64-of-uid> with the base64 string of the user name. Replace <base64-of-pwd> with the base64 string of the password. Replace <NIC1-mac-address> with the MAC address of the bare metal node’s first NIC.

    Refer to the BMC addressing section for additional BMC configuration options. Replace <protocol> with the BMC protocol, such as ipmi or redfish. Replace <bmc-ip> with the IP address of the bare metal node’s baseboard management controller.
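
    For example, when expanding the cluster by using Redfish virtual media, the bmc section might look like the following; the system path after /redfish/v1/Systems/ varies by hardware vendor, so treat this only as an illustration:

    bmc:
      address: redfish-virtualmedia://<bmc-ip>/redfish/v1/Systems/1
      credentialsName: openshift-worker-<num>-bmc-secret
      # Skip TLS verification of the BMC certificate; omit if the certificate can be verified
      disableCertificateVerification: true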

    If the MAC address of an existing bare metal node matches the MAC address of a bare metal host that you are attempting to provision, then the Ironic installation will fail. If the host enrollment, inspection, cleaning, or other Ironic steps fail, the Bare Metal Operator retries the installation continuously. See Diagnosing a host duplicate MAC address for more information.

  5. Create the bare metal node.

    [kni@provisioner ~]$ oc -n openshift-machine-api create -f bmh.yaml
    secret/openshift-worker-<num>-bmc-secret created
    baremetalhost.metal3.io/openshift-worker-<num> created

    Where <num> is the worker number.

  6. Power up and inspect the bare metal node.

    [kni@provisioner ~]$ oc -n openshift-machine-api get bmh openshift-worker-<num>

    Where <num> is the worker node number.

    NAME                     STATUS   PROVISIONING STATUS   CONSUMER   BMC                       HARDWARE PROFILE   ONLINE   ERROR
    openshift-worker-<num>   OK       ready                            ipmi://<out-of-band-ip>   unknown            true
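
    Registration and inspection can take several minutes. Rather than rerunning the command, you can watch the BareMetalHost resource until the PROVISIONING STATUS reaches ready:

    [kni@provisioner ~]$ oc -n openshift-machine-api get bmh openshift-worker-<num> -w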

Diagnosing a duplicate MAC address when provisioning a new host in the cluster

If the MAC address of an existing bare-metal node in the cluster matches the MAC address of a bare-metal host you are attempting to add to the cluster, the Bare Metal Operator associates the host with the existing node. If the host enrollment, inspection, cleaning, or other Ironic steps fail, the Bare Metal Operator retries the installation continuously. A registration error is displayed for the failed bare-metal host.

You can diagnose a duplicate MAC address by examining the bare-metal hosts that are running in the openshift-machine-api namespace.

Prerequisites
  • Install an OpenShift Container Platform cluster on bare metal.

  • Install the OpenShift Container Platform CLI (oc).

  • Log in as a user with cluster-admin privileges.

Procedure

To determine whether a bare-metal host that fails provisioning has the same MAC address as an existing node, do the following:

  1. Get the bare-metal hosts running in the openshift-machine-api namespace:

    $ oc get bmh -n openshift-machine-api
    Example output
    NAME                 STATUS   PROVISIONING STATUS      CONSUMER
    openshift-master-0   OK       externally provisioned   openshift-zpwpq-master-0
    openshift-master-1   OK       externally provisioned   openshift-zpwpq-master-1
    openshift-master-2   OK       externally provisioned   openshift-zpwpq-master-2
    openshift-worker-0   OK       provisioned              openshift-zpwpq-worker-0-lv84n
    openshift-worker-1   OK       provisioned              openshift-zpwpq-worker-0-zd8lm
    openshift-worker-2   error    registering
  2. To see more detailed information about the status of the failing host, run the following command replacing <bare_metal_host_name> with the name of the host:

    $ oc get -n openshift-machine-api bmh <bare_metal_host_name> -o yaml
    Example output
    ...
    status:
      errorCount: 12
      errorMessage: MAC address b4:96:91:1d:7c:20 conflicts with existing node openshift-worker-1
      errorType: registration error
    ...
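
    To find which host the conflicting MAC address belongs to, you can also list the boot MAC address recorded for every bare-metal host in the namespace. This is a minimal sketch that assumes bootMACAddress is set in each BareMetalHost spec:

    $ oc -n openshift-machine-api get bmh -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.bootMACAddress}{"\n"}{end}'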

Provisioning the bare metal node

Provisioning the bare metal node requires executing the following procedure from the provisioner node.

Procedure
  1. Ensure the PROVISIONING STATUS is ready before provisioning the bare metal node.

    $ oc -n openshift-machine-api get bmh openshift-worker-<num>

    Where <num> is the worker node number.

    NAME                     STATUS   PROVISIONING STATUS   CONSUMER   BMC                       HARDWARE PROFILE   ONLINE   ERROR
    openshift-worker-<num>   OK       ready                            ipmi://<out-of-band-ip>   unknown            true
  2. Get a count of the worker nodes.

    $ oc get nodes
    NAME                                                STATUS   ROLES           AGE     VERSION
    provisioner.openshift.example.com                   Ready    master          30h     v1.16.2
    openshift-master-1.openshift.example.com            Ready    master          30h     v1.16.2
    openshift-master-2.openshift.example.com            Ready    master          30h     v1.16.2
    openshift-master-3.openshift.example.com            Ready    master          30h     v1.16.2
    openshift-worker-0.openshift.example.com            Ready    worker          30h     v1.16.2
    openshift-worker-1.openshift.example.com            Ready    worker          30h     v1.16.2
  3. Get the machine set.

    $ oc get machinesets -n openshift-machine-api
    NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
    ...
    openshift-worker-0.example.com      1         1         1       1           55m
    openshift-worker-1.example.com      1         1         1       1           55m
  4. Increase the number of worker nodes by one.

    $ oc scale --replicas=<num> machineset <machineset> -n openshift-machine-api

    Replace <num> with the new number of worker nodes. Replace <machineset> with the name of the machine set from the previous step.
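
    For example, with the machine sets listed in the previous step, scaling openshift-worker-0.example.com from one replica to two adds one worker node; your machine set name and target replica count will differ:

    $ oc scale --replicas=2 machineset openshift-worker-0.example.com -n openshift-machine-api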

  5. Check the status of the bare metal node.

    $ oc -n openshift-machine-api get bmh openshift-worker-<num>

    Where <num> is the worker node number. The status changes from ready to provisioning.

    NAME                     STATUS   PROVISIONING STATUS   CONSUMER                       BMC                       HARDWARE PROFILE   ONLINE   ERROR
    openshift-worker-<num>   OK       provisioning          openshift-worker-<num>-65tjz   ipmi://<out-of-band-ip>   unknown            true

    The status remains provisioning until the OpenShift Container Platform cluster finishes provisioning the node. This can take 30 minutes or more. When complete, the status changes to provisioned.

    NAME                     STATUS   PROVISIONING STATUS   CONSUMER                       BMC                       HARDWARE PROFILE   ONLINE   ERROR
    openshift-worker-<num>   OK       provisioned           openshift-worker-<num>-65tjz   ipmi://<out-of-band-ip>   unknown            true
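
    If you want only the provisioning state rather than the full table, you can query it directly. The BareMetalHost resource exposes the state in the status.provisioning.state field:

    $ oc -n openshift-machine-api get bmh openshift-worker-<num> -o jsonpath='{.status.provisioning.state}'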
  6. Once provisioned, ensure the bare metal node is ready.

    $ oc get nodes
    NAME                                          STATUS   ROLES   AGE     VERSION
    provisioner.openshift.example.com             Ready    master  30h     v1.16.2
    openshift-master-1.openshift.example.com      Ready    master  30h     v1.16.2
    openshift-master-2.openshift.example.com      Ready    master  30h     v1.16.2
    openshift-master-3.openshift.example.com      Ready    master  30h     v1.16.2
    openshift-worker-0.openshift.example.com      Ready    worker  30h     v1.16.2
    openshift-worker-1.openshift.example.com      Ready    worker  30h     v1.16.2
    openshift-worker-<num>.openshift.example.com  Ready    worker  3m27s   v1.16.2

    You can also check the kubelet service logs on the new worker node.

    $ ssh openshift-worker-<num>
    [kni@openshift-worker-<num>]$ journalctl -fu kubelet