Use the following procedures to install an OpenShift Container Platform cluster using the Agent-based Installer.
You reviewed details about the OpenShift Container Platform installation and update processes.
You read the documentation on selecting a cluster installation method and preparing it for users.
If you use a firewall or proxy, you configured it to allow the sites that your cluster requires access to.
The following procedures deploy a single-node OpenShift Container Platform cluster in a disconnected environment. You can use these procedures as a basis and modify them according to your requirements.
Use this procedure to download the Agent-based Installer and the CLI needed for your installation.
Log in to the OpenShift Container Platform web console using your login credentials.
Navigate to Datacenter.
Click Run Agent-based Installer locally.
Select the operating system and architecture for the OpenShift Installer and Command line interface.
Click Download Installer to download and extract the install program.
Download or copy the pull secret by clicking on Download pull secret or Copy pull secret.
Click Download command-line tools and place the openshift-install binary in a directory that is on your PATH.
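For example, on a Linux host you can extract the downloaded archive and move the binary to a directory on your PATH. The archive file name shown below is illustrative and varies by release and architecture:
$ tar -xvf openshift-install-linux.tar.gz
$ sudo mv openshift-install /usr/local/bin/
$ openshift-install version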
Use this procedure to create the preferred configuration inputs used to create the agent image.
Install the nmstate dependency by running the following command:
$ sudo dnf install /usr/bin/nmstatectl -y
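You can optionally confirm that the tool is available by running the following command:
$ nmstatectl --version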
Place the openshift-install binary in a directory that is on your PATH.
Create a directory to store the install configuration by running the following command:
$ mkdir ~/<directory_name>
This is the preferred method for the Agent-based installation. Using GitOps ZTP manifests is optional.
Create the install-config.yaml file by running the following command:
$ cat << EOF > ./<directory_name>/install-config.yaml
apiVersion: v1
baseDomain: test.example.com
compute:
- architecture: amd64 (1)
  hyperthreading: Enabled
  name: worker
  replicas: 0
controlPlane:
  architecture: amd64
  hyperthreading: Enabled
  name: master
  replicas: 1
metadata:
  name: sno-cluster (2)
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 192.168.0.0/16
  networkType: OVNKubernetes (3)
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}
pullSecret: '<pull_secret>' (4)
sshKey: '<ssh_pub_key>' (5)
EOF
(1) Specify the system architecture. Valid values are amd64 and arm64.
(2) Required. Specify your cluster name.
(3) Specify the cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN. The default value is OVNKubernetes.
(4) Specify your pull secret.
(5) Specify your SSH public key.
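If you do not yet have an SSH key pair to supply in the sshKey field, you can generate one. A minimal example, with an illustrative key file path:
$ ssh-keygen -t ed25519 -N '' -f ~/.ssh/agent_installer_key
$ cat ~/.ssh/agent_installer_key.pub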
If you set the platform to vSphere or baremetal, you can configure IP address endpoints for cluster nodes in three ways: IPv4, IPv6, or IPv4 and IPv6 in parallel (dual-stack). IPv6 is supported only on bare metal platforms.
Example of dual-stack networking:
networking:
  clusterNetwork:
  - cidr: 172.21.0.0/16
    hostPrefix: 23
  - cidr: fd02::/48
    hostPrefix: 64
  machineNetwork:
  - cidr: 192.168.11.0/16
  - cidr: 2001:DB8::/32
  serviceNetwork:
  - 172.22.0.0/16
  - fd03::/112
  networkType: OVNKubernetes
platform:
  baremetal:
    apiVIPs:
    - 192.168.11.3
    - 2001:DB8::4
    ingressVIPs:
    - 192.168.11.4
    - 2001:DB8::5
When you use a disconnected mirror registry, you must add the certificate file that you created previously for your mirror registry to the additionalTrustBundle field of the install-config.yaml file.
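As a sketch, the relevant install-config.yaml fields can look like the following. The mirror registry host name is illustrative:
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  <mirror_registry_certificate>
  -----END CERTIFICATE-----
imageContentSources:
- mirrors:
  - mirror.example.com:8443/openshift/release-images
  source: quay.io/openshift-release-dev/ocp-release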
Create the agent-config.yaml file by running the following command:
$ cat > agent-config.yaml << EOF
apiVersion: v1alpha1
kind: AgentConfig
metadata:
  name: sno-cluster
rendezvousIP: 192.168.111.80 (1)
hosts: (2)
  - hostname: master-0 (3)
    interfaces:
      - name: eno1
        macAddress: 00:ef:44:21:e6:a5
    rootDeviceHints: (4)
      deviceName: /dev/sdb
    networkConfig: (5)
      interfaces:
        - name: eno1
          type: ethernet
          state: up
          mac-address: 00:ef:44:21:e6:a5
          ipv4:
            enabled: true
            address:
              - ip: 192.168.111.80
                prefix-length: 23
            dhcp: false
      dns-resolver:
        config:
          server:
            - 192.168.111.1
      routes:
        config:
          - destination: 0.0.0.0/0
            next-hop-address: 192.168.111.2
            next-hop-interface: eno1
            table-id: 254
EOF
(1) This IP address is used to determine which node performs the bootstrapping process as well as running the assisted-service component. You must provide the rendezvous IP address when you do not specify at least one host's IP address in the networkConfig parameter. If this address is not provided, one IP address is selected from the provided hosts' networkConfig.
(2) Optional: Host configuration. The number of hosts defined must not exceed the total number of hosts defined in the install-config.yaml file, which is the sum of the values of the compute.replicas and controlPlane.replicas parameters.
(3) Optional: Overrides the hostname obtained from either the Dynamic Host Configuration Protocol (DHCP) or a reverse DNS lookup. Each host must have a unique hostname supplied by one of these methods.
(4) Enables provisioning of the Red Hat Enterprise Linux CoreOS (RHCOS) image to a particular device. The installation program examines the devices in the order it discovers them, and compares the discovered values with the hint values. It uses the first discovered device that matches the hint value. A sketch of an alternative hint follows this list.
(5) Optional: Configures the network interface of a host in NMState format.
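If the device name is not stable across reboots, other rootDeviceHints selectors such as serialNumber or wwn can be used instead. A sketch with a placeholder serial number:
rootDeviceHints:
  serialNumber: "<disk_serial_number>"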
As an optional task, you can use GitOps Zero Touch Provisioning (ZTP) manifests to configure your installation beyond the options available through the install-config.yaml and agent-config.yaml files.
GitOps ZTP manifests can be generated with or without configuring the install-config.yaml and agent-config.yaml files.
You have placed the openshift-install binary in a directory that is on your PATH.
Optional: You have created and configured the install-config.yaml and agent-config.yaml files.
Generate ZTP cluster manifests by running the following command:
$ openshift-install agent create cluster-manifests --dir <installation_directory>
If you have created the install-config.yaml and agent-config.yaml files, any configurations made to those files are imported to the ZTP cluster manifests when they are generated.
Navigate to the cluster-manifests directory by running the following command:
$ cd <installation_directory>/cluster-manifests
Configure the manifest files in the cluster-manifests directory.
For sample files, see the "Sample GitOps ZTP custom resources" section.
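Based on the sample custom resources shown later in this document, the cluster-manifests directory typically contains files such as the following:
$ ls <installation_directory>/cluster-manifests
agent-cluster-install.yaml  cluster-image-set.yaml  nmstateconfig.yaml
cluster-deployment.yaml     infra-env.yaml          pull-secret.yaml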
Disconnected clusters: If you did not define mirror configuration in the install-config.yaml file before generating the ZTP manifests, perform the following steps:
Navigate to the mirror directory by running the following command:
$ cd ../mirror
Configure the manifest files in the mirror directory.
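The mirror directory typically holds a registries.conf file and, when the mirror registry uses a custom certificate, a ca-bundle.crt file. A minimal registries.conf sketch, assuming an illustrative mirror host:
[[registry]]
location = "quay.io/openshift-release-dev/ocp-release"
mirror-by-digest-only = true

[[registry.mirror]]
location = "mirror.example.com:8443/openshift/release-images"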
See Challenges of the network far edge to learn more about GitOps ZTP.
Use this procedure to boot the agent image on your machines.
Create the agent image by running the following command:
$ openshift-install --dir <install_directory> agent create image
Red Hat Enterprise Linux CoreOS (RHCOS) supports multipathing on the primary disk, allowing stronger resilience to hardware failure to achieve higher host availability. Multipathing is enabled by default in the agent ISO image, with a default /etc/multipath.conf configuration.
Boot the agent.x86_64.iso or agent.aarch64.iso image on the bare metal machines.
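For example, one way to boot a physical machine is to write the image to a USB drive on a Linux host. Replace /dev/sdX with your USB device; this command overwrites all data on that device:
$ sudo dd if=agent.x86_64.iso of=/dev/sdX bs=4M status=progress oflag=sync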
After you boot the agent image and network services are made available to the host, the agent console application performs a pull check to verify that the current host can retrieve release images.
If the primary pull check passes, you can quit the application to continue with the installation. If the pull check fails, the application performs additional checks, as seen in the Additional checks section of the TUI, to help you troubleshoot the problem. A failure for any of the additional checks is not necessarily critical as long as the primary pull check succeeds.
If there are host network configuration issues that might cause an installation to fail, you can use the console application to make adjustments to your network configurations.
If the agent console application detects host network configuration issues, the installation workflow is halted until the user manually stops the console application and signals the intention to proceed.
Wait for the agent console application to check whether or not the configured release image can be pulled from a registry.
If the agent console application states that the installer connectivity checks have passed, wait for the prompt to time out to continue with the installation.
You can still choose to view or change network configuration settings even if the connectivity checks have passed. However, if you choose to interact with the agent console application rather than letting it time out, you must manually quit the TUI to proceed with the installation.
If the agent console application checks have failed, which is indicated by a red icon beside the Release image URL pull check, use the following steps to reconfigure the host's network settings:
Read the Check Errors section of the TUI. This section displays error messages specific to the failed checks.
Select Configure network to launch the NetworkManager TUI.
Select Edit a connection and select the connection you want to reconfigure.
Edit the configuration and select OK to save your changes.
Select Back to return to the main screen of the NetworkManager TUI.
Select Activate a Connection.
Select the reconfigured network to deactivate it.
Select the reconfigured network again to reactivate it.
Select Back and then select Quit to return to the agent console application.
Wait at least five seconds for the continuous network checks to restart using the new network configuration.
If the Release image URL pull check succeeds and displays a green icon beside the URL, select Quit to exit the agent console application and continue with the installation.
Use the following procedure to track installation progress and to verify a successful installation.
You have configured a DNS record for the Kubernetes API server.
Optional: To know when the bootstrap host (rendezvous host) reboots, run the following command:
$ ./openshift-install --dir <install_directory> agent wait-for bootstrap-complete \ (1)
--log-level=info (2)
(1) For <install_directory>, specify the path to the directory where the agent ISO was generated.
(2) To view different installation details, specify warn, debug, or error instead of info.
...................................................................
...................................................................
INFO Bootstrap configMap status is complete
INFO cluster bootstrap is complete
The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines.
To track the progress and verify successful installation, run the following command:
$ openshift-install --dir <install_directory> agent wait-for install-complete (1)
(1) For <install_directory>, specify the path to the directory where the agent ISO was generated.
...................................................................
...................................................................
INFO Cluster is installed
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run
INFO export KUBECONFIG=/home/core/installer/auth/kubeconfig
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.sno-cluster.test.example.com
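After the installation completes, you can optionally verify cluster health with standard oc commands, for example:
$ export KUBECONFIG=<install_directory>/auth/kubeconfig
$ oc get nodes
$ oc get clusterversion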
If you are using the optional method of GitOps ZTP manifests, you can configure IP address endpoints for cluster nodes through the agent-cluster-install.yaml file. IPv6 is supported only on bare metal platforms.
Example of dual-stack networking:
apiVIP: 192.168.11.3
ingressVIP: 192.168.11.4
clusterDeploymentRef:
  name: mycluster
imageSetRef:
  name: openshift-4.13
networking:
  clusterNetwork:
  - cidr: 172.21.0.0/16
    hostPrefix: 23
  - cidr: fd02::/48
    hostPrefix: 64
  machineNetwork:
  - cidr: 192.168.11.0/16
  - cidr: 2001:DB8::/32
  serviceNetwork:
  - 172.22.0.0/16
  - fd03::/112
  networkType: OVNKubernetes
See Configuring a three-node cluster to deploy three-node clusters in bare metal environments.
You can optionally use GitOps Zero Touch Provisioning (ZTP) custom resource (CR) objects to install an OpenShift Container Platform cluster with the Agent-based Installer.
You can customize the following GitOps ZTP custom resources to specify more details about your OpenShift Container Platform cluster. The following sample GitOps ZTP custom resources are for a single-node cluster.
agent-cluster-install.yaml file
apiVersion: extensions.hive.openshift.io/v1beta1
kind: AgentClusterInstall
metadata:
  name: test-agent-cluster-install
  namespace: cluster0
spec:
  clusterDeploymentRef:
    name: ostest
  imageSetRef:
    name: openshift-4.13
  networking:
    clusterNetwork:
    - cidr: 10.128.0.0/14
      hostPrefix: 23
    serviceNetwork:
    - 172.30.0.0/16
  provisionRequirements:
    controlPlaneAgents: 1
    workerAgents: 0
  sshPublicKey: <ssh_public_key>
cluster-deployment.yaml file
apiVersion: hive.openshift.io/v1
kind: ClusterDeployment
metadata:
  name: ostest
  namespace: cluster0
spec:
  baseDomain: test.metalkube.org
  clusterInstallRef:
    group: extensions.hive.openshift.io
    kind: AgentClusterInstall
    name: test-agent-cluster-install
    version: v1beta1
  clusterName: ostest
  controlPlaneConfig:
    servingCertificates: {}
  platform:
    agentBareMetal:
      agentSelector:
        matchLabels:
          bla: aaa
  pullSecretRef:
    name: pull-secret
cluster-image-set.yaml file
apiVersion: hive.openshift.io/v1
kind: ClusterImageSet
metadata:
  name: openshift-4.13
spec:
  releaseImage: registry.ci.openshift.org/ocp/release:4.13.0-0.nightly-2022-06-06-025509
infra-env.yaml file
apiVersion: agent-install.openshift.io/v1beta1
kind: InfraEnv
metadata:
  name: myinfraenv
  namespace: cluster0
spec:
  clusterRef:
    name: ostest
    namespace: cluster0
  cpuArchitecture: aarch64
  pullSecretRef:
    name: pull-secret
  sshAuthorizedKey: <ssh_public_key>
  nmStateConfigLabelSelector:
    matchLabels:
      cluster0-nmstate-label-name: cluster0-nmstate-label-value
nmstateconfig.yaml file
apiVersion: agent-install.openshift.io/v1beta1
kind: NMStateConfig
metadata:
  name: master-0
  namespace: openshift-machine-api
  labels:
    cluster0-nmstate-label-name: cluster0-nmstate-label-value
spec:
  config:
    interfaces:
      - name: eth0
        type: ethernet
        state: up
        mac-address: 52:54:01:aa:aa:a1
        ipv4:
          enabled: true
          address:
            - ip: 192.168.122.2
              prefix-length: 23
          dhcp: false
    dns-resolver:
      config:
        server:
          - 192.168.122.1
    routes:
      config:
        - destination: 0.0.0.0/0
          next-hop-address: 192.168.122.1
          next-hop-interface: eth0
          table-id: 254
  interfaces:
    - name: "eth0"
      macAddress: 52:54:01:aa:aa:a1
pull-secret.yaml file
apiVersion: v1
kind: Secret
type: kubernetes.io/dockerconfigjson
metadata:
  name: pull-secret
  namespace: cluster0
stringData:
  .dockerconfigjson: <pull_secret>
See Challenges of the network far edge to learn more about GitOps Zero Touch Provisioning (ZTP).
Use the following procedure to gather log data about a failed Agent-based installation to provide for a support case.
You have configured a DNS record for the Kubernetes API server.
Run the following command and collect the output:
$ ./openshift-install --dir <installation_directory> agent wait-for bootstrap-complete --log-level=debug
...
ERROR Bootstrap failed to complete: : bootstrap process timed out: context deadline exceeded
If the output from the previous command indicates a failure, or if the bootstrap is not progressing, run the following command to connect to the rendezvous host and collect the output:
$ ssh core@<node-ip> agent-gather -O >agent-gather.tar.xz
Red Hat Support can diagnose most issues using the data gathered from the rendezvous host, but if some hosts are not able to register, gathering this data from every host might be helpful.
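For example, you can loop over the node IP addresses with the same agent-gather command. The IP addresses below are illustrative:
$ for host in 192.168.111.80 192.168.111.81; do
    ssh core@"${host}" agent-gather -O > "agent-gather-${host}.tar.xz"
  done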
If the bootstrap completes and the cluster nodes reboot, run the following command and collect the output:
$ ./openshift-install --dir <install_directory> agent wait-for install-complete --log-level=debug
If the output from the previous command indicates a failure, perform the following steps:
Export the kubeconfig file to your environment by running the following command:
$ export KUBECONFIG=<install_directory>/auth/kubeconfig
Gather information for debugging by running the following command:
$ oc adm must-gather
Create a compressed file from the must-gather directory that was just created in your working directory by running the following command:
$ tar cvaf must-gather.tar.gz <must_gather_directory>
Excluding the /auth subdirectory, attach the installation directory used during the deployment to your support case on the Red Hat Customer Portal.
Attach all other data gathered from this procedure to your support case.