You can install single-node OpenShift by using the web-based Assisted Installer and a discovery ISO that you generate with the Assisted Installer. Alternatively, you can install single-node OpenShift by using coreos-installer to generate the installation ISO.
To install OpenShift Container Platform on a single node, use the web-based Assisted Installer wizard to guide you through the process and manage the installation.
Installing OpenShift Container Platform on a single node requires a discovery ISO, which the Assisted Installer can generate.
On the administration host, open a browser and navigate to Red Hat OpenShift Cluster Manager.
Click Create Cluster to create a new cluster.
In the Cluster name field, enter a name for the cluster.
In the Base domain field, enter a base domain. For example:
example.com
All DNS records must be subdomains of this base domain and include the cluster name, for example:
<cluster-name>.example.com
Note: You cannot change the base domain or cluster name after cluster installation.
Select Install single node OpenShift (SNO) and complete the rest of the wizard steps. Download the discovery ISO.
Make a note of the discovery ISO URL for installing with virtual media.
Note: If you enable OpenShift Virtualization during this process, you must have a second local storage device of at least 50 GiB for your virtual machines.
Use the Assisted Installer to install the single-node cluster.
Attach the RHCOS discovery ISO to the target host.
Configure the boot drive order in the server BIOS settings to boot from the attached discovery ISO and then reboot the server.
On the administration host, return to the browser. Wait for the host to appear in the list of discovered hosts. If necessary, reload the Assisted Clusters page and select the cluster name.
Complete the install wizard steps. Add networking details, including a subnet from the available subnets. Add the SSH public key if necessary.
Monitor the installation’s progress. Watch the cluster events. After the installation process finishes writing the operating system image to the server’s hard disk, the server restarts.
Remove the discovery ISO, and reset the server to boot from the installation drive.
The server restarts several times automatically, deploying the control plane.
To install OpenShift Container Platform on a single node, first generate the installation ISO, and then boot the server from the ISO. You can monitor the installation by using the openshift-install installation program.
Installing OpenShift Container Platform on a single node requires an installation ISO, which you can generate with the following procedure.
Install podman.
See "Requirements for installing OpenShift on a single node" for networking requirements, including DNS records.
Set the OpenShift Container Platform version:
$ OCP_VERSION=<ocp_version> (1)
(1) Replace <ocp_version> with the current version, for example, latest-4.13.
Set the host architecture:
$ ARCH=<architecture> (1)
(1) Replace <architecture> with the target host architecture, for example, aarch64 or x86_64.
Download the OpenShift Container Platform client (oc) and make it available for use by entering the following commands:
$ curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/$OCP_VERSION/openshift-client-linux.tar.gz -o oc.tar.gz
$ tar zxf oc.tar.gz
$ chmod +x oc
Download the OpenShift Container Platform installer and make it available for use by entering the following commands:
$ curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/$OCP_VERSION/openshift-install-linux.tar.gz -o openshift-install-linux.tar.gz
$ tar zxvf openshift-install-linux.tar.gz
$ chmod +x openshift-install
Retrieve the RHCOS ISO URL by running the following command:
$ ISO_URL=$(./openshift-install coreos print-stream-json | grep location | grep $ARCH | grep iso | cut -d\" -f4)
Download the RHCOS ISO:
$ curl -L $ISO_URL -o rhcos-live.iso
Prepare the install-config.yaml file:
apiVersion: v1
baseDomain: <domain> (1)
compute:
- name: worker
  replicas: 0 (2)
controlPlane:
  name: master
  replicas: 1 (3)
metadata:
  name: <name> (4)
networking: (5)
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16 (6)
  networkType: OVNKubernetes
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}
bootstrapInPlace:
  installationDisk: /dev/disk/by-id/<disk_id> (7)
pullSecret: '<pull_secret>' (8)
sshKey: |
  <ssh_key> (9)
(1) Add the cluster domain name.
(2) Set the compute replicas to 0. This makes the control plane node schedulable.
(3) Set the controlPlane replicas to 1. In conjunction with the previous compute setting, this setting ensures that the cluster runs on a single node.
(4) Set the metadata name to the cluster name.
(5) Set the networking details. OVN-Kubernetes is the only allowed network plugin type for single-node clusters.
(6) Set the cidr value to match the subnet of the single-node OpenShift cluster.
(7) Set the path to the installation disk drive, for example, /dev/disk/by-id/wwn-0x64cd98f04fde100024684cf3034da5c2.
(8) Copy the pull secret from Red Hat OpenShift Cluster Manager and add the contents to this configuration setting.
(9) Add the public SSH key from the administration host so that you can log in to the cluster after installation.
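To choose a stable value for the installationDisk field in callout 7, you can list the persistent device identifiers on the target host. This helper step is a suggestion and is not part of the original procedure:
$ ls -l /dev/disk/by-id/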
Generate OpenShift Container Platform assets by running the following commands:
$ mkdir ocp
$ cp install-config.yaml ocp
$ ./openshift-install --dir=ocp create single-node-ignition-config
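As an optional sanity check, not part of the original procedure, you can confirm that the bootstrap-in-place Ignition file used in the next step was generated in the asset directory:
$ ls ocp/bootstrap-in-place-for-live-iso.ign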
Embed the ignition data into the RHCOS ISO by running the following commands:
$ alias coreos-installer='podman run --privileged --pull always --rm \
-v /dev:/dev -v /run/udev:/run/udev -v $PWD:/data \
-w /data quay.io/coreos/coreos-installer:release'
$ coreos-installer iso ignition embed -fi ocp/bootstrap-in-place-for-live-iso.ign rhcos-live.iso
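Optionally, you can verify that the Ignition data was embedded by printing it back from the ISO with the same coreos-installer utility:
$ coreos-installer iso ignition show rhcos-live.iso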
See Requirements for installing OpenShift on a single node for more information about installing OpenShift Container Platform on a single node.
See Enabling cluster capabilities for more information about enabling cluster capabilities that were disabled prior to installation.
See Optional cluster capabilities in OpenShift Container Platform 4.13 for more information about the features provided by each capability.
Use openshift-install to monitor the progress of the single-node cluster installation.
Attach the modified RHCOS installation ISO to the target host.
Configure the boot drive order in the server BIOS settings to boot from the attached installation ISO and then reboot the server.
On the administration host, monitor the installation by running the following command:
$ ./openshift-install --dir=ocp wait-for install-complete
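If you want to track the bootstrap phase separately, openshift-install also provides a standard wait-for bootstrap-complete target; using it here is a suggestion rather than a step from the original procedure:
$ ./openshift-install --dir=ocp wait-for bootstrap-complete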
The server restarts several times while deploying the control plane.
After the installation is complete, check the environment by running the following command:
$ export KUBECONFIG=ocp/auth/kubeconfig
$ oc get nodes
NAME STATUS ROLES AGE VERSION
control-plane.example.com Ready master,worker 10m v1.26.0
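As a further optional check, you can confirm that all cluster Operators have finished rolling out:
$ oc get clusteroperators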
The AWS documentation for installer-provisioned installation is written with a high availability cluster consisting of three control plane nodes. When referring to the AWS documentation, consider the differences between the requirements for a single-node OpenShift cluster and a high availability cluster.
The required machines for cluster installation section in the AWS documentation indicates a temporary bootstrap machine, three control plane machines, and at least two compute machines. For a single-node cluster, you require only a temporary bootstrap machine and one AWS instance for the control plane node, and no worker nodes.
The minimum resource requirements for cluster installation section in the AWS documentation indicates a control plane node with 4 vCPUs and 100 GB of storage. For a single-node cluster, you must have a minimum of 8 vCPU cores and 120 GB of storage.
The controlPlane.replicas setting in the install-config.yaml file should be set to 1.
The compute.replicas setting in the install-config.yaml file should be set to 0. This makes the control plane node schedulable.
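As a minimal sketch, the relevant install-config.yaml fragment for a single-node cluster on AWS might look like the following. The cluster name placeholder and the region value are illustrative assumptions, not values from this document; only the replicas values are prescribed above:
compute:
- name: worker
  replicas: 0 # no worker nodes; the control plane node is schedulable
controlPlane:
  name: master
  replicas: 1 # single control plane node
metadata:
  name: <cluster_name>
platform:
  aws:
    region: us-east-1 # illustrative region, replace with your own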
You can install software using a bootable USB drive that contains an ISO image. Booting the server with the USB drive prepares the server for the software installation.
On the administration host, insert a USB drive into a USB port.
Create a bootable USB drive, for example:
# dd if=<path_to_iso> of=<path_to_usb> status=progress
where:
<path_to_iso> is the relative path to the downloaded ISO file, for example, rhcos-live.iso.
<path_to_usb> is the location of the connected USB drive, for example, /dev/sdb.
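If you are not sure which device path corresponds to the connected USB drive, you can list the block devices before running dd; this helper command is a suggestion and is not part of the original procedure:
# lsblk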
After the ISO is copied to the USB drive, you can use the USB drive to install software on the server.
You can provision hosts in your network using ISOs that you install using the Redfish Baseboard Management Controller (BMC) API.
Note: This example procedure demonstrates the steps on a Dell server.
Note: Ensure that you have the latest firmware version of iDRAC that is compatible with your hardware. If you have any issues with the hardware or firmware, you must contact the provider.
Download the Red Hat Enterprise Linux CoreOS (RHCOS) installation ISO.
Use a Dell PowerEdge server that is compatible with iDRAC9.
Copy the ISO file to an HTTP server accessible in your network.
Boot the host from the hosted ISO file, for example:
Call the Redfish API to set the hosted ISO as the VirtualMedia boot media by running the following command:
$ curl -k -u <bmc_username>:<bmc_password> -d '{"Image":"<hosted_iso_file>", "Inserted": true}' -H "Content-Type: application/json" -X POST <host_bmc_address>/redfish/v1/Managers/iDRAC.Embedded.1/VirtualMedia/CD/Actions/VirtualMedia.InsertMedia
where:
<bmc_username>:<bmc_password> is the username and password for the target host BMC.
<hosted_iso_file> is the URL for the hosted installation ISO, for example: http://webserver.example.com/rhcos-live-minimal.iso. The ISO must be accessible from the target host machine.
<host_bmc_address> is the BMC IP address of the target host machine.
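Before calling the API, you can optionally confirm that the hosted ISO is reachable over HTTP; this check is a suggestion and is not part of the original procedure:
$ curl -I http://webserver.example.com/rhcos-live-minimal.iso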
Set the host to boot from the VirtualMedia device by running the following command:
$ curl -k -u <bmc_username>:<bmc_password> -X PATCH -H 'Content-Type: application/json' -d '{"Boot": {"BootSourceOverrideTarget": "Cd", "BootSourceOverrideMode": "UEFI", "BootSourceOverrideEnabled": "Once"}}' <host_bmc_address>/redfish/v1/Systems/System.Embedded.1
Reboot the host:
$ curl -k -u <bmc_username>:<bmc_password> -d '{"ResetType": "ForceRestart"}' -H 'Content-type: application/json' -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset
Optional: If the host is powered off, you can boot it by using the {"ResetType": "On"} switch. Run the following command:
$ curl -k -u <bmc_username>:<bmc_password> -d '{"ResetType": "On"}' -H 'Content-type: application/json' -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset
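To confirm the current power state before or after issuing a reset, you can query the system resource with a standard Redfish GET request and inspect the PowerState field in the response; this optional check is an assumption based on the Redfish API, not a step from the original procedure:
$ curl -k -u <bmc_username>:<bmc_password> -X GET <host_bmc_address>/redfish/v1/Systems/System.Embedded.1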
In some cases, you cannot attach an external disk drive to a server, but you need to access the server remotely to provision a node. In these cases, it is recommended to enable SSH access to the server. You can create a live RHCOS ISO with the SSH daemon (sshd) enabled and with predefined credentials so that you can access the server after it boots.
You installed the butane utility.
Download the coreos-installer binary from the coreos-installer image mirror page.
Download the latest live RHCOS ISO from mirror.openshift.com.
Create the embedded.yaml file that the butane utility uses to create the Ignition file:
variant: openshift
version: 4.13.0
metadata:
  name: sshd
  labels:
    machineconfiguration.openshift.io/role: worker
passwd:
  users:
    - name: core (1)
      ssh_authorized_keys:
        - '<ssh_key>'
(1) The core user has sudo privileges.
Run the butane utility to create the Ignition file by using the following command:
$ butane -pr embedded.yaml -o embedded.ign
After the Ignition file is created, you can include the configuration in a new live RHCOS ISO, which is named rhcos-sshd-4.13.0-x86_64-live.x86_64.iso, with the coreos-installer utility:
$ coreos-installer iso ignition embed -i embedded.ign rhcos-4.13.0-x86_64-live.x86_64.iso -o rhcos-sshd-4.13.0-x86_64-live.x86_64.iso
Check that the custom live ISO can be used to boot the server by running the following command:
# coreos-installer iso ignition show rhcos-sshd-4.13.0-x86_64-live.x86_64.iso
{
  "ignition": {
    "version": "3.2.0"
  },
  "passwd": {
    "users": [
      {
        "name": "core",
        "sshAuthorizedKeys": [
          "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCZnG8AIzlDAhpyENpK2qKiTT8EbRWOrz7NXjRzopbPu215mocaJgjjwJjh1cYhgPhpAp6M/ttTk7I4OI7g4588Apx4bwJep6oWTU35LkY8ZxkGVPAJL8kVlTdKQviDv3XX12l4QfnDom4tm4gVbRH0gNT1wzhnLP+LKYm2Ohr9D7p9NBnAdro6k++XWgkDeijLRUTwdEyWunIdW1f8G0Mg8Y1Xzr13BUo3+8aey7HLKJMDtobkz/C8ESYA/f7HJc5FxF0XbapWWovSSDJrr9OmlL9f4TfE+cQk3s+eoKiz2bgNPRgEEwihVbGsCN4grA+RzLCAOpec+2dTJrQvFqsD alosadag@sonnelicht.local"
        ]
      }
    ]
  }
}
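After the server boots from the custom ISO, you can log in as the core user over SSH. The host address placeholder below is illustrative:
$ ssh core@<host_ip_address>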