Single-node OpenShift clusters reduce the host prerequisites for deployment to a single host. This is useful for deployments in constrained environments or at the network edge. However, sometimes you need to add additional capacity to your cluster, for example, in telecommunications and network edge scenarios. In these scenarios, you can add worker nodes to the single-node cluster.

There are several ways that you can add worker nodes to a single-node cluster. You can add worker nodes manually, by using Red Hat OpenShift Cluster Manager, or by using the Assisted Installer REST API directly.

Adding worker nodes does not expand the cluster control plane, and it does not provide high availability to your cluster. For single-node OpenShift clusters, high availability is handled by failing over to another site. It is not recommended to add a large number of worker nodes to a single-node cluster.

Unlike multi-node clusters, all ingress traffic is routed to the single control-plane node by default, even after you add worker nodes.

Requirements for installing single-node OpenShift worker nodes

To install a single-node OpenShift worker node, you must address the following requirements:

  • Administration host: You must have a computer to prepare the ISO and to monitor the installation.

  • Production-grade server: Installing single-node OpenShift worker nodes requires a server with sufficient resources to run OpenShift Container Platform services and a production workload.

    Table 1. Minimum resource requirements

    Profile | vCPU         | Memory      | Storage
    Minimum | 2 vCPU cores | 8 GB of RAM | 100 GB

    One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio:

    (threads per core × cores) × sockets = vCPUs
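
    For example, a server with one socket, eight cores, and SMT enabled (two threads per core) provides (2 × 8) × 1 = 16 vCPUs.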

    The server must have a Baseboard Management Controller (BMC) when booting with virtual media.

  • Networking: The worker node server must have access to the internet, or access to a local registry if it is not connected to a routable network. The worker node server must have a DHCP reservation or a static IP address and be able to access the single-node OpenShift cluster Kubernetes API, ingress route, and cluster node domain names. You must configure DNS so that each of the following fully qualified domain names (FQDNs) for the single-node OpenShift cluster resolves to the correct IP address:

    Table 2. Required DNS records

    Usage          | FQDN                                 | Description
    Kubernetes API | api.<cluster_name>.<base_domain>     | Add a DNS A/AAAA or CNAME record. This record must be resolvable by clients external to the cluster.
    Internal API   | api-int.<cluster_name>.<base_domain> | Add a DNS A/AAAA or CNAME record when creating the ISO manually. This record must be resolvable by nodes within the cluster.
    Ingress route  | *.apps.<cluster_name>.<base_domain>  | Add a wildcard DNS A/AAAA or CNAME record that targets the node. This record must be resolvable by clients external to the cluster.

    Without persistent IP addresses, communications between the apiserver and etcd might fail.
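
    For example, assuming a hypothetical cluster named compute-1 with the base domain example.com, you can verify that each record resolves by using dig:

    $ dig +short api.compute-1.example.com
    $ dig +short api-int.compute-1.example.com
    $ dig +short test.apps.compute-1.example.com

    Each query should return the IP address of the single-node cluster.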

Adding worker nodes using the Assisted Installer and OpenShift Cluster Manager

You can add worker nodes to single-node OpenShift clusters that were created on Red Hat OpenShift Cluster Manager using the Assisted Installer.

Adding worker nodes to single-node OpenShift clusters is supported only for clusters running OpenShift Container Platform version 4.11 and later.

Prerequisites
  • Have access to a single-node OpenShift cluster that was installed by using the Assisted Installer.

  • Install the OpenShift CLI (oc).

  • Log in as a user with cluster-admin privileges.

  • Ensure that all the required DNS records exist for the cluster that you are adding the worker node to.

Procedure
  1. Log in to OpenShift Cluster Manager and click the single-node cluster that you want to add a worker node to.

  2. Click Add hosts, and download the discovery ISO for the new worker node, adding an SSH public key and configuring cluster-wide proxy settings as required.

  3. Boot the target host using the discovery ISO, and wait for the host to be discovered in the console. After the host is discovered, start the installation.

  4. As the installation proceeds, it generates pending certificate signing requests (CSRs) for the worker node. When prompted, approve the pending CSRs to complete the installation, as shown in the example after this procedure.

    When the worker node is successfully installed, it is listed as a worker node in the cluster web console.
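
    For example, you can review and approve the pending CSRs from the command line, where <csr_name> is a placeholder for a name returned by the first command:

    $ oc get csr
    $ oc adm certificate approve <csr_name>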

Adding worker nodes using the Assisted Installer API

You can add worker nodes to single-node OpenShift clusters using the Assisted Installer REST API. Before you add worker nodes, you must log in to OpenShift Cluster Manager and authenticate against the API.

Authenticating against the Assisted Installer REST API

Before you can use the Assisted Installer REST API, you must authenticate against the API using a JSON web token (JWT) that you generate.

Prerequisites
  • Install jq.

Procedure
  1. Log in to OpenShift Cluster Manager and copy your API token.

  2. Set the $OFFLINE_TOKEN variable using the copied API token by running the following command:

    $ export OFFLINE_TOKEN=<copied_api_token>
  3. Set the $JWT_TOKEN variable using the previously set $OFFLINE_TOKEN variable:

    $ export JWT_TOKEN=$(
      curl \
      --silent \
      --header "Accept: application/json" \
      --header "Content-Type: application/x-www-form-urlencoded" \
      --data-urlencode "grant_type=refresh_token" \
      --data-urlencode "client_id=cloud-services" \
      --data-urlencode "refresh_token=${OFFLINE_TOKEN}" \
      "https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token" \
      | jq --raw-output ".access_token"
    )

    The JWT token is valid for 15 minutes only.

Verification
  • Optional: Check that you can access the API by running the following command:

    $ curl -s https://api.openshift.com/api/assisted-install/v2/component-versions -H "Authorization: Bearer ${JWT_TOKEN}" | jq
    Example output
    {
        "release_tag": "v2.5.1",
        "versions":
        {
            "assisted-installer": "registry.redhat.io/rhai-tech-preview/assisted-installer-rhel8:v1.0.0-175",
            "assisted-installer-controller": "registry.redhat.io/rhai-tech-preview/assisted-installer-reporter-rhel8:v1.0.0-223",
            "assisted-installer-service": "quay.io/app-sre/assisted-service:ac87f93",
            "discovery-agent": "registry.redhat.io/rhai-tech-preview/assisted-installer-agent-rhel8:v1.0.0-156"
        }
    }

Adding worker nodes using the Assisted Installer REST API

You can add worker nodes to clusters using the Assisted Installer REST API.

Prerequisites
  • Install the OpenShift Cluster Manager CLI (ocm).

  • Log in to OpenShift Cluster Manager as a user with cluster creation privileges.

  • Install jq.

  • Ensure that all the required DNS records exist for the cluster that you are adding the worker node to.

Procedure
  1. Authenticate against the Assisted Installer REST API and generate a JSON web token (JWT) for your session. The generated JWT token is valid for 15 minutes only.

  2. Set the $API_URL variable by running the following command:

    $ export API_URL=<api_url> (1)
    1 Replace <api_url> with the Assisted Installer API URL, for example, https://api.openshift.com
  3. Import the single-node OpenShift cluster by running the following commands:

    1. Set the $OPENSHIFT_CLUSTER_ID variable. Log in to the cluster and run the following command:

      $ export OPENSHIFT_CLUSTER_ID=$(oc get clusterversion -o jsonpath='{.items[].spec.clusterID}')
    2. Set the $CLUSTER_REQUEST variable that is used to import the cluster:

      $ export CLUSTER_REQUEST=$(jq --null-input --arg openshift_cluster_id "$OPENSHIFT_CLUSTER_ID" '{
        "api_vip_dnsname": "<api_vip>", (1)
        "openshift_cluster_id": $openshift_cluster_id,
        "name": "<openshift_cluster_name>" (2)
      }')
      1 Replace <api_vip> with the hostname for the cluster’s API server. This can be the DNS domain for the API server or the IP address of the single node, which the worker node must be able to reach. For example, api.compute-1.example.com.
      2 Replace <openshift_cluster_name> with the plain text name for the cluster. The cluster name should match the cluster name that was set during the Day 1 cluster installation.
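
      Optional: Inspect the generated request body by running the following command:

      $ echo "$CLUSTER_REQUEST" | jq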
    3. Import the cluster and set the $CLUSTER_ID variable. Run the following command:

      $ CLUSTER_ID=$(curl "$API_URL/api/assisted-install/v2/clusters/import" -H "Authorization: Bearer ${JWT_TOKEN}" -H 'accept: application/json' -H 'Content-Type: application/json' \
        -d "$CLUSTER_REQUEST" | tee /dev/stderr | jq -r '.id')
  4. Generate the InfraEnv resource for the cluster and set the $INFRA_ENV_ID variable by running the following commands:

    1. Download the pull secret file from Red Hat OpenShift Cluster Manager at console.redhat.com.

    2. Set the $INFRA_ENV_REQUEST variable:

      $ export INFRA_ENV_REQUEST=$(jq --null-input \
          --slurpfile pull_secret <path_to_pull_secret_file> \(1)
          --arg ssh_pub_key "$(cat <path_to_ssh_pub_key>)" \(2)
          --arg cluster_id "$CLUSTER_ID" '{
        "name": "<infraenv_name>", (3)
        "pull_secret": $pull_secret[0] | tojson,
        "cluster_id": $cluster_id,
        "ssh_authorized_key": $ssh_pub_key,
        "image_type": "<iso_image_type>" (4)
      }')
      1 Replace <path_to_pull_secret_file> with the path to the local file containing the downloaded pull secret from Red Hat OpenShift Cluster Manager at console.redhat.com.
      2 Replace <path_to_ssh_pub_key> with the path to the public SSH key required to access the host. If you do not set this value, you cannot access the host while in discovery mode.
      3 Replace <infraenv_name> with the plain text name for the InfraEnv resource.
      4 Replace <iso_image_type> with the ISO image type, either full-iso or minimal-iso.
    3. Post the $INFRA_ENV_REQUEST to the /v2/infra-envs API and set the $INFRA_ENV_ID variable:
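
      For example, you can post the request with a call that mirrors the cluster import step above, capturing the returned ID in the $INFRA_ENV_ID variable:

      $ INFRA_ENV_ID=$(curl "$API_URL/api/assisted-install/v2/infra-envs" -H "Authorization: Bearer ${JWT_TOKEN}" -H 'accept: application/json' -H 'Content-Type: application/json' \
        -d "$INFRA_ENV_REQUEST" | tee /dev/stderr | jq -r '.id')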