Installing single-node OpenShift using the Assisted Installer

To install OpenShift Container Platform on a single node, use the web-based Assisted Installer wizard to guide you through the process and manage the installation.

Generating the discovery ISO with the Assisted Installer

Installing OpenShift Container Platform on a single node requires a discovery ISO, which the Assisted Installer can generate.

  1. On the administration host, open a browser and navigate to Red Hat OpenShift Cluster Manager.

  2. Click Create Cluster to create a new cluster.

  3. In the Cluster name field, enter a name for the cluster.

  4. In the Base domain field, enter a base domain.

    All DNS records must be subdomains of this base domain and include the cluster name.

    You cannot change the base domain or cluster name after cluster installation.

  5. Select Install single node OpenShift (SNO) and complete the rest of the wizard steps. Download the discovery ISO.

  6. Make a note of the discovery ISO URL for installing with virtual media.

If you enable OpenShift Virtualization during this process, you must have a second local storage device of at least 50GiB for your virtual machines.
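The 50 GiB minimum can be verified before installation with a short shell helper. This is an illustrative sketch, not part of the official procedure; the function name `meets_virt_minimum` and the `lsblk` usage in the comment are assumptions.

```shell
# Illustrative helper (not part of the documented procedure): check whether a
# candidate storage device is large enough for OpenShift Virtualization.
# Pass a size in bytes, for example:
#   meets_virt_minimum "$(lsblk -bdno SIZE /dev/sdb)"
meets_virt_minimum() {
  local size_bytes="$1"
  local min_bytes=$((50 * 1024 * 1024 * 1024))  # 50 GiB expressed in bytes
  if [ "$size_bytes" -ge "$min_bytes" ]; then
    echo "yes"
  else
    echo "no"
  fi
}
```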

Installing single-node OpenShift with the Assisted Installer

Use the Assisted Installer to install the single-node cluster.

  1. Attach the RHCOS discovery ISO to the target host.

  2. Configure the boot drive order in the server BIOS settings to boot from the attached discovery ISO and then reboot the server.

  3. On the administration host, return to the browser. Wait for the host to appear in the list of discovered hosts. If necessary, reload the Assisted Clusters page and select the cluster name.

  4. Complete the install wizard steps. Add networking details, including a subnet from the available subnets. Add the SSH public key if necessary.

  5. Monitor the installation’s progress. Watch the cluster events. After the installation process finishes writing the operating system image to the server’s hard disk, the server restarts.

  6. Remove the discovery ISO, and reset the server to boot from the installation drive.

    The server restarts several times automatically, deploying the control plane.

Installing single-node OpenShift manually

To install OpenShift Container Platform on a single node, first generate the installation ISO, and then boot the server from the ISO. You can monitor the installation using the openshift-install installation program.

Generating the installation ISO with coreos-installer

Installing OpenShift Container Platform on a single node requires an installation ISO, which you can generate with the following procedure.

  • Install podman.

See "Requirements for installing OpenShift on a single node" for networking requirements, including DNS records.

  1. Set the OpenShift Container Platform version:

    $ OCP_VERSION=<ocp_version> (1)
    1 Replace <ocp_version> with the current version, for example, latest-4.13.
  2. Set the host architecture:

    $ ARCH=<architecture> (1)
    1 Replace <architecture> with the target host architecture, for example, aarch64 or x86_64.
  3. Download the OpenShift Container Platform client (oc) and make it available for use by entering the following commands:

    $ curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/$OCP_VERSION/openshift-client-linux.tar.gz -o oc.tar.gz
    $ tar zxf oc.tar.gz
    $ chmod +x oc
  4. Download the OpenShift Container Platform installer and make it available for use by entering the following commands:

    $ curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/$OCP_VERSION/openshift-install-linux.tar.gz -o openshift-install-linux.tar.gz
    $ tar zxvf openshift-install-linux.tar.gz
    $ chmod +x openshift-install
  5. Retrieve the RHCOS ISO URL by running the following command:

    $ ISO_URL=$(./openshift-install coreos print-stream-json | grep location | grep $ARCH | grep iso | cut -d\" -f4)
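The pipeline above extracts the ISO URL from the text of the stream JSON with `grep` and `cut`. The following sketch shows why `cut -d\" -f4` yields the URL; the sample line and mirror hostname are illustrative only.

```shell
# Each line of `openshift-install coreos print-stream-json` output that
# matches the greps looks roughly like:
#   "location": "https://.../rhcos-live.x86_64.iso",
# Splitting the line on double quotes, field 2 is `location`, field 3 is
# the `: ` separator, and field 4 is the URL itself.
line='        "location": "https://mirror.example.com/rhcos-live.x86_64.iso",'
echo "$line" | cut -d'"' -f4
```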
  6. Download the RHCOS ISO:

    $ curl -L $ISO_URL -o rhcos-live.iso
  7. Prepare the install-config.yaml file:

    apiVersion: v1
    baseDomain: <domain> (1)
    compute:
    - name: worker
      replicas: 0 (2)
    controlPlane:
      name: master
      replicas: 1 (3)
    metadata:
      name: <name> (4)
    networking: (5)
      clusterNetwork:
      - cidr: 10.128.0.0/14
        hostPrefix: 23
      machineNetwork:
      - cidr: 10.0.0.0/16 (6)
      networkType: OVNKubernetes
      serviceNetwork:
      - 172.30.0.0/16
    platform:
      none: {}
    bootstrapInPlace:
      installationDisk: /dev/disk/by-id/<disk_id> (7)
    pullSecret: '<pull_secret>' (8)
    sshKey: |
      <ssh_key> (9)
    1 Add the cluster domain name.
    2 Set the compute replicas to 0. This makes the control plane node schedulable.
    3 Set the controlPlane replicas to 1. In conjunction with the previous compute setting, this setting ensures the cluster runs on a single node.
    4 Set the metadata name to the cluster name.
    5 Set the networking details. OVN-Kubernetes is the only allowed network plugin type for single-node clusters.
    6 Set the cidr value to match the subnet of the single-node OpenShift cluster.
    7 Set the path to the installation disk drive, for example, /dev/disk/by-id/wwn-0x64cd98f04fde100024684cf3034da5c2.
    8 Copy the pull secret from the Red Hat OpenShift Cluster Manager and add the contents to this configuration setting.
    9 Add the public SSH key from the administration host so that you can log in to the cluster after installation.
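Before generating the assets, you can sanity-check the replica counts described in the callouts above. This is a hypothetical helper (the function name is illustrative, not part of the official procedure); it only confirms that the compute pool requests zero replicas and the control plane requests one.

```shell
# Hypothetical sanity check: confirm that install-config.yaml requests zero
# compute replicas and one control plane replica, as required for
# single-node OpenShift.
check_sno_config() {
  local cfg="$1"
  grep -A2 'name: worker' "$cfg" | grep -q 'replicas: 0' \
    || { echo "compute replicas must be 0"; return 1; }
  grep -A2 'name: master' "$cfg" | grep -q 'replicas: 1' \
    || { echo "controlPlane replicas must be 1"; return 1; }
  echo "replica counts are valid for single-node OpenShift"
}
```

Run it as `check_sno_config install-config.yaml` before the `create single-node-ignition-config` step.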
  8. Generate OpenShift Container Platform assets by running the following commands:

    $ mkdir ocp
    $ cp install-config.yaml ocp
    $ ./openshift-install --dir=ocp create single-node-ignition-config
  9. Embed the ignition data into the RHCOS ISO by running the following commands:

    $ alias coreos-installer='podman run --privileged --pull always --rm \
            -v /dev:/dev -v /run/udev:/run/udev -v $PWD:/data \
            -w /data quay.io/coreos/coreos-installer:release'
    $ coreos-installer iso ignition embed -fi ocp/bootstrap-in-place-for-live-iso.ign rhcos-live.iso

Monitoring the cluster installation using openshift-install

Use openshift-install to monitor the progress of the single-node cluster installation.

  1. Attach the modified RHCOS installation ISO to the target host.

  2. Configure the boot drive order in the server BIOS settings to boot from the attached RHCOS installation ISO and then reboot the server.

  3. On the administration host, monitor the installation by running the following command:

    $ ./openshift-install --dir=ocp wait-for install-complete

    The server restarts several times while deploying the control plane.

  • After the installation is complete, check the environment by running the following command:

    $ export KUBECONFIG=ocp/auth/kubeconfig
    $ oc get nodes
    Example output
    NAME                         STATUS   ROLES           AGE     VERSION
    control-plane.example.com    Ready    master,worker   10m     v1.26.0

Installing single-node OpenShift on AWS

Additional requirements for installing on a single node on AWS

The AWS documentation for installer-provisioned installation is written with a high availability cluster consisting of three control plane nodes. When referring to the AWS documentation, consider the differences between the requirements for a single-node OpenShift cluster and a high availability cluster.

  • The required machines for cluster installation section in the AWS documentation indicates a temporary bootstrap machine, three control plane machines, and at least two compute machines. For a single-node cluster, you require only a temporary bootstrap machine and one AWS instance for the control plane node; no worker nodes are required.

  • The minimum resource requirements for cluster installation section in the AWS documentation indicates a control plane node with 4 vCPUs and 100GB of storage. For a single-node cluster, you must have a minimum of 8 vCPU cores and 120GB of storage.

  • The controlPlane.replicas setting in the install-config.yaml file should be set to 1.

  • The compute.replicas setting in the install-config.yaml file should be set to 0. This makes the control plane node schedulable.
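Together, the two replica settings above look like the following fragment of install-config.yaml. This is a partial sketch showing only the fields discussed here; all other required fields are omitted.

```yaml
# Fragment of install-config.yaml for single-node OpenShift on AWS
# (other required fields omitted):
compute:
- name: worker
  replicas: 0   # no worker nodes; the control plane node is schedulable
controlPlane:
  name: master
  replicas: 1   # a single control plane node
```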

Installing single-node OpenShift on AWS

Installing a single node cluster on AWS requires installer-provisioned installation using the "Installing a cluster on AWS with customizations" procedure.

Creating a bootable ISO image on a USB drive

You can install software using a bootable USB drive that contains an ISO image. Booting the server with the USB drive prepares the server for the software installation.

  1. On the administration host, insert a USB drive into a USB port.

  2. Create a bootable USB drive, for example:

    # dd if=<path_to_iso> of=<path_to_usb> status=progress

    where:

    <path_to_iso>
      is the relative path to the downloaded ISO file, for example, rhcos-live.iso.

    <path_to_usb>
      is the location of the connected USB drive, for example, /dev/sdb.

    After the ISO is copied to the USB drive, you can use the USB drive to install software on the server.

Booting from an HTTP-hosted ISO image using the Redfish API

You can provision hosts in your network using ISOs that you install using the Redfish Baseboard Management Controller (BMC) API.

This example procedure demonstrates the steps on a Dell server.

Ensure that you have the latest firmware version of iDRAC that is compatible with your hardware. If you have any issues with the hardware or firmware, you must contact the provider.

  • Download the installation Red Hat Enterprise Linux CoreOS (RHCOS) ISO.

  • Use a Dell PowerEdge server that is compatible with iDRAC9.

  1. Copy the ISO file to an HTTP server accessible in your network.

  2. Boot the host from the hosted ISO file, for example:

    1. Call the Redfish API to set the hosted ISO as the VirtualMedia boot media by running the following command:

      $ curl -k -u <bmc_username>:<bmc_password> -d '{"Image":"<hosted_iso_file>", "Inserted": true}' -H "Content-Type: application/json" -X POST <host_bmc_address>/redfish/v1/Managers/iDRAC.Embedded.1/VirtualMedia/CD/Actions/VirtualMedia.InsertMedia

      where:

      <bmc_username>:<bmc_password>
        is the username and password for the target host BMC.

      <hosted_iso_file>
        is the URL for the hosted installation ISO, for example: http://webserver.example.com/rhcos-live-minimal.iso. The ISO must be accessible from the target host machine.

      <host_bmc_address>
        is the BMC IP address of the target host machine.

    2. Set the host to boot from the VirtualMedia device by running the following command:

      $ curl -k -u <bmc_username>:<bmc_password> -X PATCH -H 'Content-Type: application/json' -d '{"Boot": {"BootSourceOverrideTarget": "Cd", "BootSourceOverrideMode": "UEFI", "BootSourceOverrideEnabled": "Once"}}' <host_bmc_address>/redfish/v1/Systems/System.Embedded.1
    3. Reboot the host:

      $ curl -k -u <bmc_username>:<bmc_password> -d '{"ResetType": "ForceRestart"}' -H 'Content-type: application/json' -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset
    4. Optional: If the host is powered off, you can boot it using the {"ResetType": "On"} switch. Run the following command:

      $ curl -k -u <bmc_username>:<bmc_password> -d '{"ResetType": "On"}' -H 'Content-type: application/json' -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset
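The calls above differ only in their JSON payload and endpoint. The following sketch shows how the payloads are composed; the helper names are hypothetical and not part of the documented procedure.

```shell
# Illustrative helpers (hypothetical, not part of the documented procedure):
# compose the JSON payloads used by the Redfish calls above.
insert_media_payload() {
  # $1: URL of the hosted installation ISO
  printf '{"Image":"%s", "Inserted": true}' "$1"
}
reset_payload() {
  # $1: ResetType value, for example ForceRestart or On
  printf '{"ResetType": "%s"}' "$1"
}
```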

Creating a custom live RHCOS ISO for remote server access

In some cases, you cannot attach an external disk drive to a server, but you still need to access the server remotely to provision a node. In this situation, it is recommended to enable SSH access to the server. You can create a live RHCOS ISO with sshd enabled and with predefined credentials so that you can access the server after it boots.

  • You installed the butane utility.

  1. Download the coreos-installer binary from the coreos-installer image mirror page.

  2. Download the latest live RHCOS ISO from mirror.openshift.com.

  3. Create the embedded.yaml file that the butane utility uses to create the Ignition file:

    variant: openshift
    version: 4.13.0
    metadata:
      name: sshd
      labels:
        machineconfiguration.openshift.io/role: worker
    passwd:
      users:
      - name: core (1)
        ssh_authorized_keys:
          - '<ssh_key>'
    1 The core user has sudo privileges.
  4. Run the butane utility to create the Ignition file using the following command:

    $ butane -pr embedded.yaml -o embedded.ign
  5. After the Ignition file is created, you can include the configuration in a new live RHCOS ISO, which is named rhcos-sshd-4.13.0-x86_64-live.x86_64.iso, with the coreos-installer utility:

    $ coreos-installer iso ignition embed -i embedded.ign rhcos-4.13.0-x86_64-live.x86_64.iso -o rhcos-sshd-4.13.0-x86_64-live.x86_64.iso
  • Check that the custom live ISO can be used to boot the server by running the following command:

    # coreos-installer iso ignition show rhcos-sshd-4.13.0-x86_64-live.x86_64.iso
    Example output
      "ignition": {
        "version": "3.2.0"
      "passwd": {
        "users": [
            "name": "core",
            "sshAuthorizedKeys": [
              "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCZnG8AIzlDAhpyENpK2qKiTT8EbRWOrz7NXjRzopbPu215mocaJgjjwJjh1cYhgPhpAp6M/ttTk7I4OI7g4588Apx4bwJep6oWTU35LkY8ZxkGVPAJL8kVlTdKQviDv3XX12l4QfnDom4tm4gVbRH0gNT1wzhnLP+LKYm2Ohr9D7p9NBnAdro6k++XWgkDeijLRUTwdEyWunIdW1f8G0Mg8Y1Xzr13BUo3+8aey7HLKJMDtobkz/C8ESYA/f7HJc5FxF0XbapWWovSSDJrr9OmlL9f4TfE+cQk3s+eoKiz2bgNPRgEEwihVbGsCN4grA+RzLCAOpec+2dTJrQvFqsD alosadag@sonnelicht.local"