Single-node OpenShift clusters reduce the host prerequisites for deployment to a single host. This is useful for deployments in constrained environments or at the network edge. However, sometimes you need to add additional capacity to your cluster, for example, in telecommunications and network edge scenarios. In these scenarios, you can add worker nodes to the single-node cluster.

There are several ways that you can add worker nodes to a single-node cluster. You can add worker nodes to a cluster manually, by using Red Hat OpenShift Cluster Manager, or by using the Assisted Installer REST API directly.

Adding worker nodes does not expand the cluster control plane, and it does not provide high availability to your cluster. For single-node OpenShift clusters, high availability is handled by failing over to another site. It is not recommended to add a large number of worker nodes to a single-node cluster.

Unlike in multi-node clusters, all ingress traffic is routed to the single control-plane node by default, even after you add worker nodes.
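
One way to confirm this behavior on a running cluster is to check which node hosts the default router pods. The following command is a minimal check that assumes the default IngressController configuration; the NODE column in the output shows where the router pods are scheduled:

    $ oc get pods -n openshift-ingress -o wide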

Requirements for installing single-node OpenShift worker nodes

To install a single-node OpenShift worker node, you must address the following requirements:

  • Administration host: You must have a computer to prepare the ISO and to monitor the installation.

  • Production-grade server: Installing single-node OpenShift worker nodes requires a server with sufficient resources to run OpenShift Container Platform services and a production workload.

    Table 1. Minimum resource requirements

    Profile   vCPU          Memory       Storage
    Minimum   2 vCPU cores  8 GB of RAM  100 GB

    One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio:

    (threads per core × cores) × sockets = vCPUs
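
    For example, a server with two sockets, eight cores per socket, and SMT enabled (two threads per core) provides:

    (2 threads per core × 8 cores) × 2 sockets = 32 vCPUs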

    The server must have a Baseboard Management Controller (BMC) when booting with virtual media.

  • Networking: The worker node server must have access to the internet, or access to a local registry if it is not connected to a routable network. The worker node server must have a DHCP reservation or a static IP address and be able to access the single-node OpenShift cluster Kubernetes API, ingress route, and cluster node domain names. You must configure DNS so that each of the following fully qualified domain names (FQDNs) for the single-node OpenShift cluster resolves to the correct IP address:

    Table 2. Required DNS records

    Kubernetes API: api.<cluster_name>.<base_domain>
    Add a DNS A/AAAA or CNAME record. This record must be resolvable by clients external to the cluster.

    Internal API: api-int.<cluster_name>.<base_domain>
    Add a DNS A/AAAA or CNAME record when creating the ISO manually. This record must be resolvable by nodes within the cluster.

    Ingress route: *.apps.<cluster_name>.<base_domain>
    Add a wildcard DNS A/AAAA or CNAME record that targets the node. This record must be resolvable by clients external to the cluster.

    Without persistent IP addresses, communications between the apiserver and etcd might fail.
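
    For example, you can verify that the required records resolve from the administration host by using dig. The cluster name sno and the base domain example.com in the following commands are placeholder values; substitute your own values, and use any hostname under the wildcard record to test the ingress route. Each command should return the IP address that the record targets:

    $ dig +short api.sno.example.com
    $ dig +short api-int.sno.example.com
    $ dig +short test.apps.sno.example.com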

Adding worker nodes using the Assisted Installer and OpenShift Cluster Manager

You can add worker nodes to single-node OpenShift clusters that were created on Red Hat OpenShift Cluster Manager using the Assisted Installer.

Adding worker nodes to single-node OpenShift clusters is supported only for clusters running OpenShift Container Platform version 4.11 and later.

Prerequisites
  • Have access to a single-node OpenShift cluster installed using the Assisted Installer.

  • Install the OpenShift CLI (oc).

  • Log in as a user with cluster-admin privileges.

  • Ensure that all the required DNS records exist for the cluster that you are adding the worker node to.

Procedure
  1. Log in to OpenShift Cluster Manager and click the single-node cluster that you want to add a worker node to.

  2. Click Add hosts, and download the discovery ISO for the new worker node, adding an SSH public key and configuring cluster-wide proxy settings as required.

  3. Boot the target host using the discovery ISO, and wait for the host to be discovered in the console. After the host is discovered, start the installation.

  4. As the installation proceeds, it generates pending certificate signing requests (CSRs) for the worker node. When prompted, approve the pending CSRs to complete the installation.

    When the worker node is successfully installed, it is listed as a worker node in the cluster web console.
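
    If you prefer to review and approve the CSRs from the command line, the following is a typical sequence. Replace <csr_name> with a name from the NAME column of the oc get csr output. Two CSRs are typically generated for each new node: a client CSR first and, after you approve it, a serving certificate CSR:

    $ oc get csr
    $ oc adm certificate approve <csr_name>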

Adding worker nodes using the Assisted Installer API

You can add worker nodes to single-node OpenShift clusters using the Assisted Installer REST API. Before you add worker nodes, you must log in to OpenShift Cluster Manager and authenticate against the API.

Authenticating against the Assisted Installer REST API

Before you can use the Assisted Installer REST API, you must authenticate against the API using a JSON web token (JWT) that you generate.

Prerequisites
  • Install jq.

Procedure
  1. Log in to OpenShift Cluster Manager and copy your API token.

  2. Set the $OFFLINE_TOKEN variable using the copied API token by running the following command:

    $ export OFFLINE_TOKEN=<copied_api_token>
  3. Set the $JWT_TOKEN variable using the previously set $OFFLINE_TOKEN variable:

    $ export JWT_TOKEN=$(
      curl \
      --silent \
      --header "Accept: application/json" \
      --header "Content-Type: application/x-www-form-urlencoded" \
      --data-urlencode "grant_type=refresh_token" \
      --data-urlencode "client_id=cloud-services" \
      --data-urlencode "refresh_token=${OFFLINE_TOKEN}" \
      "https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token" \
      | jq --raw-output ".access_token"
    )

    The JWT token is valid for 15 minutes only.

Verification
  • Optional: Check that you can access the API by running the following command:

    $ curl -s https://api.openshift.com/api/assisted-install/v2/component-versions -H "Authorization: Bearer ${JWT_TOKEN}" | jq
    Example output
    {
        "release_tag": "v2.5.1",
        "versions":
        {
            "assisted-installer": "registry.redhat.io/rhai-tech-preview/assisted-installer-rhel8:v1.0.0-175",
            "assisted-installer-controller": "registry.redhat.io/rhai-tech-preview/assisted-installer-reporter-rhel8:v1.0.0-223",
            "assisted-installer-service": "quay.io/app-sre/assisted-service:ac87f93",
            "discovery-agent": "registry.redhat.io/rhai-tech-preview/assisted-installer-agent-rhel8:v1.0.0-156"
        }
    }

Adding worker nodes using the Assisted Installer REST API

You can add worker nodes to clusters using the Assisted Installer REST API.

Prerequisites
  • Install the OpenShift Cluster Manager CLI (ocm).

  • Log in to OpenShift Cluster Manager as a user with cluster creation privileges.

  • Install jq.

  • Ensure that all the required DNS records exist for the cluster that you are adding the worker node to.

Procedure
  1. Authenticate against the Assisted Installer REST API and generate a JSON web token (JWT) for your session. The generated JWT token is valid for 15 minutes only.

  2. Set the $API_URL variable by running the following command:

    $ export API_URL=<api_url> (1)
    1 Replace <api_url> with the Assisted Installer API URL, for example, https://api.openshift.com
  3. Import the single-node OpenShift cluster by running the following commands:

    1. Set the $OPENSHIFT_CLUSTER_ID variable. Log in to the cluster and run the following command:

      $ export OPENSHIFT_CLUSTER_ID=$(oc get clusterversion -o jsonpath='{.items[].spec.clusterID}')
    2. Set the $CLUSTER_REQUEST variable that is used to import the cluster:

      $ export CLUSTER_REQUEST=$(jq --null-input --arg openshift_cluster_id "$OPENSHIFT_CLUSTER_ID" '{
        "api_vip_dnsname": "<api_vip>", (1)
        "openshift_cluster_id": $openshift_cluster_id,
        "name": "<openshift_cluster_name>" (2)
      }')
      1 Replace <api_vip> with the hostname for the cluster’s API server. This can be the DNS domain for the API server or the IP address of the single node, which the worker node must be able to reach. For example, api.compute-1.example.com.
      2 Replace <openshift_cluster_name> with the plain text name for the cluster. The cluster name should match the cluster name that was set during the Day 1 cluster installation.
    3. Import the cluster and set the $CLUSTER_ID variable. Run the following command:

      $ CLUSTER_ID=$(curl "$API_URL/api/assisted-install/v2/clusters/import" -H "Authorization: Bearer ${JWT_TOKEN}" -H 'accept: application/json' -H 'Content-Type: application/json' \
        -d "$CLUSTER_REQUEST" | tee /dev/stderr | jq -r '.id')
  4. Generate the InfraEnv resource for the cluster and set the $INFRA_ENV_ID variable by running the following commands:

    1. Download the pull secret file from Red Hat OpenShift Cluster Manager at console.redhat.com.

    2. Set the $INFRA_ENV_REQUEST variable:

      $ export INFRA_ENV_REQUEST=$(jq --null-input \
          --slurpfile pull_secret <path_to_pull_secret_file> \(1)
          --arg ssh_pub_key "$(cat <path_to_ssh_pub_key>)" \(2)
          --arg cluster_id "$CLUSTER_ID" '{
        "name": "<infraenv_name>", (3)
        "pull_secret": $pull_secret[0] | tojson,
        "cluster_id": $cluster_id,
        "ssh_authorized_key": $ssh_pub_key,
        "image_type": "<iso_image_type>" (4)
      }')
      1 Replace <path_to_pull_secret_file> with the path to the local file containing the downloaded pull secret from Red Hat OpenShift Cluster Manager at console.redhat.com.
      2 Replace <path_to_ssh_pub_key> with the path to the public SSH key required to access the host. If you do not set this value, you cannot access the host while in discovery mode.
      3 Replace <infraenv_name> with the plain text name for the InfraEnv resource.
      4 Replace <iso_image_type> with the ISO image type, either full-iso or minimal-iso.
    3. Post the $INFRA_ENV_REQUEST to the /v2/infra-envs API and set the $INFRA_ENV_ID variable:

      $ INFRA_ENV_ID=$(curl "$API_URL/api/assisted-install/v2/infra-envs" -H "Authorization: Bearer ${JWT_TOKEN}" -H 'accept: application/json' -H 'Content-Type: application/json' -d "$INFRA_ENV_REQUEST" | tee /dev/stderr | jq -r '.id')
  5. Get the URL of the discovery ISO for the cluster worker node by running the following command:

    $ curl -s "$API_URL/api/assisted-install/v2/infra-envs/$INFRA_ENV_ID" -H "Authorization: Bearer ${JWT_TOKEN}" | jq -r '.download_url'
    Example output
    https://api.openshift.com/api/assisted-images/images/41b91e72-c33e-42ee-b80f-b5c5bbf6431a?arch=x86_64&image_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2NTYwMjYzNzEsInN1YiI6IjQxYjkxZTcyLWMzM2UtNDJlZS1iODBmLWI1YzViYmY2NDMxYSJ9.1EX_VGaMNejMhrAvVRBS7PDPIQtbOOc8LtG8OukE1a4&type=minimal-iso&version=4.11
  6. Download the ISO:

    $ curl -L -s '<iso_url>' --output rhcos-live-minimal.iso (1)
    1 Replace <iso_url> with the URL for the ISO from the previous step.
  7. Boot the new worker host from the downloaded rhcos-live-minimal.iso.

  8. Get the list of hosts in the cluster that are not installed. Run the following command repeatedly until the new host appears in the output:

    $ curl -s "$API_URL/api/assisted-install/v2/clusters/$CLUSTER_ID" -H "Authorization: Bearer ${JWT_TOKEN}" | jq -r '.hosts[] | select(.status != "installed").id'
    Example output
    2294ba03-c264-4f11-ac08-2f1bb2f8c296
  9. Set the $HOST_ID variable for the new worker node, for example:

    $ HOST_ID=<host_id> (1)
    1 Replace <host_id> with the host ID from the previous step.
  10. Check that the host is ready to install by running the following command:

    Ensure that you copy the entire command including the complete jq expression.

    $ curl -s $API_URL/api/assisted-install/v2/clusters/$CLUSTER_ID -H "Authorization: Bearer ${JWT_TOKEN}" | jq '
    def host_name($host):
        if (.suggested_hostname // "") == "" then
            if (.inventory // "") == "" then
                "Unknown hostname, please wait"
            else
                .inventory | fromjson | .hostname
            end
        else
            .suggested_hostname
        end;
    
    def is_notable($validation):
        ["failure", "pending", "error"] | any(. == $validation.status);
    
    def notable_validations($validations_info):
        [
            $validations_info // "{}"
            | fromjson
            | to_entries[].value[]
            | select(is_notable(.))
        ];
    
    {
        "Hosts validations": {
            "Hosts": [
                .hosts[]
                | select(.status != "installed")
                | {
                    "id": .id,
                    "name": host_name(.),
                    "status": .status,
                    "notable_validations": notable_validations(.validations_info)
                }
            ]
        },
        "Cluster validations info": {
            "notable_validations": notable_validations(.validations_info)
        }
    }
    ' -r
    Example output
    {
      "Hosts validations": {
        "Hosts": [
          {
            "id": "97ec378c-3568-460c-bc22-df54534ff08f",
            "name": "localhost.localdomain",
            "status": "insufficient",
            "notable_validations": [
              {
                "id": "ntp-synced",
                "status": "failure",
                "message": "Host couldn't synchronize with any NTP server"
              },
              {
                "id": "api-domain-name-resolved-correctly",
                "status": "error",
                "message": "Parse error for domain name resolutions result"
              },
              {
                "id": "api-int-domain-name-resolved-correctly",
                "status": "error",
                "message": "Parse error for domain name resolutions result"
              },
              {
                "id": "apps-domain-name-resolved-correctly",
                "status": "error",
                "message": "Parse error for domain name resolutions result"
              }
            ]
          }
        ]
      },
      "Cluster validations info": {
        "notable_validations": []
      }
    }
  11. When the previous command shows that the host is ready, start the installation using the /v2/infra-envs/{infra_env_id}/hosts/{host_id}/actions/install API by running the following command:

    $ curl -X POST -s "$API_URL/api/assisted-install/v2/infra-envs/$INFRA_ENV_ID/hosts/$HOST_ID/actions/install"  -H "Authorization: Bearer ${JWT_TOKEN}"
  12. As the installation proceeds, it generates pending certificate signing requests (CSRs) for the worker node. You must approve the CSRs to complete the installation.

    Run the following API call repeatedly to monitor the cluster installation:

    $ curl -s "$API_URL/api/assisted-install/v2/clusters/$CLUSTER_ID" -H "Authorization: Bearer ${JWT_TOKEN}" | jq '{
        "Cluster day-2 hosts":
            [
                .hosts[]
                | select(.status != "installed")
                | {id, requested_hostname, status, status_info, progress, status_updated_at, updated_at, infra_env_id, cluster_id, created_at}
            ]
    }'
    Example output
    {
      "Cluster day-2 hosts": [
        {
          "id": "a1c52dde-3432-4f59-b2ae-0a530c851480",
          "requested_hostname": "control-plane-1",
          "status": "added-to-existing-cluster",
          "status_info": "Host has rebooted and no further updates will be posted. Please check console for progress and to possibly approve pending CSRs",
          "progress": {
            "current_stage": "Done",
            "installation_percentage": 100,
            "stage_started_at": "2022-07-08T10:56:20.476Z",
            "stage_updated_at": "2022-07-08T10:56:20.476Z"
          },
          "status_updated_at": "2022-07-08T10:56:20.476Z",
          "updated_at": "2022-07-08T10:57:15.306369Z",
          "infra_env_id": "b74ec0c3-d5b5-4717-a866-5b6854791bd3",
          "cluster_id": "8f721322-419d-4eed-aa5b-61b50ea586ae",
          "created_at": "2022-07-06T22:54:57.161614Z"
        }
      ]
    }
  13. Optional: Run the following command to see all the events for the cluster:

    $ curl -s "$API_URL/api/assisted-install/v2/events?cluster_id=$CLUSTER_ID" -H "Authorization: Bearer ${JWT_TOKEN}" | jq -c '.[] | {severity, message, event_time, host_id}'
    Example output
    {"severity":"info","message":"Host compute-0: updated status from insufficient to known (Host is ready to be installed)","event_time":"2022-07-08T11:21:46.346Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"}
    {"severity":"info","message":"Host compute-0: updated status from known to installing (Installation is in progress)","event_time":"2022-07-08T11:28:28.647Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"}
    {"severity":"info","message":"Host compute-0: updated status from installing to installing-in-progress (Starting installation)","event_time":"2022-07-08T11:28:52.068Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"}
    {"severity":"info","message":"Uploaded logs for host compute-0 cluster 8f721322-419d-4eed-aa5b-61b50ea586ae","event_time":"2022-07-08T11:29:47.802Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"}
    {"severity":"info","message":"Host compute-0: updated status from installing-in-progress to added-to-existing-cluster (Host has rebooted and no further updates will be posted. Please check console for progress and to possibly approve pending CSRs)","event_time":"2022-07-08T11:29:48.259Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"}
    {"severity":"info","message":"Host: compute-0, reached installation stage Rebooting","event_time":"2022-07-08T11:29:48.261Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"}
  14. Log in to the cluster and approve the pending CSRs to complete the installation.

Verification
  • Check that the new worker node was successfully added to the cluster with a status of Ready:

    $ oc get nodes
    Example output
    NAME                           STATUS   ROLES           AGE   VERSION
    control-plane-1.example.com    Ready    master,worker   56m   v1.24.0+beaaed6
    compute-1.example.com          Ready    worker          11m   v1.24.0+beaaed6
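
  • Optional: One quick way to confirm that no certificate signing requests remain pending for the new node is the following command; if it returns no output, all CSRs have been approved:

    $ oc get csr | grep -i pending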

Adding worker nodes to single-node OpenShift clusters manually

You can add a worker node to a single-node OpenShift cluster manually by booting the worker node from the Red Hat Enterprise Linux CoreOS (RHCOS) ISO and using the cluster worker.ign file to join the new worker node to the cluster.

Prerequisites