
In OpenShift Container Platform version 4.10, you can install a cluster on your VMware vSphere instance using installer-provisioned infrastructure by deploying it to VMware Cloud (VMC) on AWS.

Once you configure your VMC environment for OpenShift Container Platform deployment, you use the OpenShift Container Platform installation program from the bastion management host, co-located in the VMC environment. The installation program and control plane automate the process of deploying and managing the resources needed for the OpenShift Container Platform cluster.

To customize the OpenShift Container Platform installation, you modify parameters in the install-config.yaml file before you install the cluster.

OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported.

Setting up VMC for vSphere

You can install OpenShift Container Platform on VMware Cloud (VMC) on AWS hosted vSphere clusters to enable applications to be deployed and managed both on-premises and off-premises, across the hybrid cloud.

Figure: VMC on AWS architecture

You must configure several options in your VMC environment prior to installing OpenShift Container Platform on VMware vSphere. Ensure your VMC environment has the following prerequisites:

  • Create a non-exclusive, DHCP-enabled, NSX-T network segment and subnet. Other virtual machines (VMs) can be hosted on the subnet, but at least eight IP addresses must be available for the OpenShift Container Platform deployment.

  • Allocate two IP addresses, outside the DHCP range, and configure them with reverse DNS records. For a resolution check that you can run after creating the records, see the example after this list.

    • A DNS record for api.<cluster_name>.<base_domain> pointing to the allocated IP address.

    • A DNS record for *.apps.<cluster_name>.<base_domain> pointing to the allocated IP address.

  • Configure the following firewall rules:

    • An ANY:ANY firewall rule between the OpenShift Container Platform compute network and the internet. This is used by nodes and applications to download container images.

    • An ANY:ANY firewall rule between the installation host and the software-defined data center (SDDC) management network on port 443. This allows you to upload the Red Hat Enterprise Linux CoreOS (RHCOS) OVA during deployment.

    • An HTTPS firewall rule between the OpenShift Container Platform compute network and vCenter. This connection allows OpenShift Container Platform to communicate with vCenter for provisioning and managing nodes, persistent volume claims (PVCs), and other resources.

  • You must have the following information to deploy OpenShift Container Platform:

    • The OpenShift Container Platform cluster name, such as vmc-prod-1.

    • The base DNS name, such as companyname.com.

    • If not using the default, the pod network CIDR and services network CIDR must be identified, which are set by default to 10.128.0.0/14 and 172.30.0.0/16, respectively. These CIDRs are used for pod-to-pod and pod-to-service communication and are not accessible externally; however, they must not overlap with existing subnets in your organization.

    • The following vCenter information:

      • vCenter hostname, username, and password

      • Datacenter name, such as SDDC-Datacenter

      • Cluster name, such as Cluster-1

      • Network name

      • Datastore name, such as WorkloadDatastore

        It is recommended to move your vSphere cluster to the VMC Compute-ResourcePool resource pool after your cluster installation is finished.

  • A Linux-based host deployed to VMC as a bastion.

    • The bastion host can be Red Hat Enterprise Linux (RHEL) or any other Linux-based host; it must have internet connectivity and the ability to upload an OVA to the ESXi hosts.

    • Download and install the OpenShift CLI tools to the bastion host.

      • The openshift-install installation program

      • The OpenShift CLI (oc) tool
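After you create the DNS records, you can confirm from the bastion host that both names resolve to the allocated addresses before you start the installation. A minimal spot-check, assuming the hypothetical cluster name vmc-prod-1, base domain companyname.com, and allocated addresses 192.168.1.10 and 192.168.1.11:

    $ dig +short api.vmc-prod-1.companyname.com
    192.168.1.10
    $ dig +short test-route.apps.vmc-prod-1.companyname.com
    192.168.1.11

Because *.apps.<cluster_name>.<base_domain> is a wildcard record, any name under that domain should return the Ingress address.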

You cannot use the VMware NSX Container Plugin for Kubernetes (NCP), and NSX is not used as the OpenShift SDN. The version of NSX currently available with VMC is incompatible with the version of NCP certified with OpenShift Container Platform.

However, the NSX DHCP service is used for virtual machine IP management with the full-stack automated OpenShift Container Platform deployment and with nodes provisioned, either manually or automatically, by the Machine API integration with vSphere. Additionally, NSX firewall rules are created to enable access to the OpenShift Container Platform cluster and between the bastion host and the VMC vSphere hosts.

VMC Sizer tool

VMware Cloud on AWS is built on top of AWS bare metal infrastructure; this is the same bare metal infrastructure that runs AWS native services. When a VMware Cloud on AWS software-defined data center (SDDC) is deployed, you consume these physical server nodes and run the VMware ESXi hypervisor in a single-tenant fashion. This means the physical infrastructure is not accessible to anyone else using VMC. It is important to consider how many physical hosts you will need to host your virtual infrastructure.

To determine this, VMware provides the VMC on AWS Sizer. With this tool, you can define the resources you intend to host on VMC:

  • Types of workloads

  • Total number of virtual machines

  • Specification information such as:

    • Storage requirements

    • vCPUs

    • vRAM

    • Overcommit ratios

With these details, the sizer tool can generate a report, based on VMware best practices, and recommend your cluster configuration and the number of hosts you will need.

vSphere prerequisites

Internet access for OpenShift Container Platform

In OpenShift Container Platform 4.10, you require access to the internet to install your cluster.

You must have internet access to:

  • Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.

  • Access Quay.io to obtain the packages that are required to install your cluster.

  • Obtain the packages that are required to perform cluster updates.

If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.

VMware vSphere infrastructure requirements

You must install the OpenShift Container Platform cluster on a VMware vSphere version 7 instance that meets the requirements for the components that you use.

OpenShift Container Platform version 4.10 does not support VMware vSphere version 8.0.

You can host the VMware vSphere infrastructure on-premise or on a VMware Cloud Verified provider that meets the requirements outlined in the following table:

Table 1. Version requirements for vSphere virtual environments

Virtual environment product | Required version
--------------------------- | ----------------
VM hardware version         | 15 or later
vSphere ESXi hosts          | 7
vCenter host                | 7

Installing a cluster on VMware vSphere version 7.0 Update 1 is now deprecated. This version is still fully supported, but version 4.10 of OpenShift Container Platform requires vSphere virtual hardware version 15 or later. Hardware version 15 is now the default for vSphere virtual machines in OpenShift Container Platform. To update the hardware version for your vSphere nodes, see the "Updating hardware on nodes running in vSphere" article.

If your vSphere nodes are below hardware version 15 or your VMware vSphere version is earlier than 6.7U3, upgrading from OpenShift Container Platform 4.10 to OpenShift Container Platform 4.11 is not available.

Table 2. Minimum supported vSphere version for VMware components
Component Minimum supported versions Description

Hypervisor

vSphere 7 with HW version 15

This version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. For more information about supported hardware on the latest version of Red Hat Enterprise Linux (RHEL) that is compatible with RHCOS, see Hardware on the Red Hat Customer Portal.

Storage with in-tree drivers

vSphere 7

This plugin creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform.

If you use a vSphere version 6.5 instance, upgrade to 7.0 before you install OpenShift Container Platform.

You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation.

Network connectivity requirements

You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate.

Review the following details about the required network ports. A connectivity spot-check example follows the tables.

Table 3. Ports used for all-machine to all-machine communications

Protocol | Port        | Description
-------- | ----------- | -----------
ICMP     | N/A         | Network reachability tests
TCP      | 1936        | Metrics
TCP      | 9000-9999   | Host-level services, including the node exporter on ports 9100-9101 and the Cluster Version Operator on port 9099.
TCP      | 10250-10259 | The default ports that Kubernetes reserves
TCP      | 10256       | openshift-sdn
UDP      | 4789        | Virtual extensible LAN (VXLAN)
UDP      | 6081        | Geneve
UDP      | 9000-9999   | Host-level services, including the node exporter on ports 9100-9101.
UDP      | 500         | IPsec IKE packets
UDP      | 4500        | IPsec NAT-T packets
TCP/UDP  | 30000-32767 | Kubernetes node port
ESP      | N/A         | IPsec Encapsulating Security Payload (ESP)

Table 4. Ports used for all-machine to control plane communications

Protocol | Port | Description
-------- | ---- | -----------
TCP      | 6443 | Kubernetes API

Table 5. Ports used for control plane machine to control plane machine communications

Protocol | Port      | Description
-------- | --------- | -----------
TCP      | 2379-2380 | etcd server and peer ports
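The installation program does not perform these checks for you, but if you need to troubleshoot connectivity between machines, you can spot-check an individual TCP port with a utility such as nc. For example, to verify that a machine can reach the Kubernetes API port on a control plane machine (the address shown is hypothetical):

    $ nc -zv 192.168.1.20 6443

This is a diagnostic aid only; UDP and ESP traffic require different tools to verify.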

VMware vSphere CSI Driver Operator requirements

To install the vSphere CSI Driver Operator, the following requirements must be met:

  • VMware vSphere version 6.7U3 or later

  • Virtual machines of hardware version 15 or later

  • No third-party vSphere CSI driver already installed in the cluster

If a third-party vSphere CSI driver is present in the cluster, OpenShift Container Platform does not overwrite it. If you continue with the third-party vSphere CSI driver when upgrading to the next major version of OpenShift Container Platform, the oc CLI prompts you with the following message:

VSphereCSIDriverOperatorCRUpgradeable: VMwareVSphereControllerUpgradeable:
found existing unsupported csi.vsphere.vmware.com driver

The previous message informs you that Red Hat does not support the third-party vSphere CSI driver during an OpenShift Container Platform upgrade operation. You can choose to ignore this message and continue with the upgrade operation.
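One way to check for an existing vSphere CSI driver is to query the cluster's CSIDriver objects. This is a sketch of a post-installation check; csi.vsphere.vmware.com is the standard driver name, and whether an object with that name was installed by a third party depends on your cluster's history:

    $ oc get csidriver csi.vsphere.vmware.com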

vCenter requirements

Before you install an OpenShift Container Platform cluster on your vCenter that uses infrastructure that the installer provisions, you must prepare your environment.

Required vCenter account privileges

To install an OpenShift Container Platform cluster in a vCenter, the installation program requires access to an account with privileges to read and create the required resources. Using an account that has global administrative privileges is the simplest way to access all of the necessary permissions.

If you cannot use an account with global administrative privileges, you must create roles to grant the privileges necessary for OpenShift Container Platform cluster installation. While most of the privileges are always required, some are required only if you plan for the installation program to provision a folder to contain the OpenShift Container Platform cluster on your vCenter instance, which is the default behavior. You must create or amend vSphere roles for the specified objects to grant the required privileges.

An additional role is required if the installation program is to create a vSphere virtual machine folder.

Roles and privileges required for installation in vSphere API
vSphere object for role When required Required privileges in vSphere API

vSphere vCenter

Always

Cns.Searchable
InventoryService.Tagging.AttachTag
InventoryService.Tagging.CreateCategory
InventoryService.Tagging.CreateTag
InventoryService.Tagging.DeleteCategory
InventoryService.Tagging.DeleteTag
InventoryService.Tagging.EditCategory
InventoryService.Tagging.EditTag
Sessions.ValidateSession
StorageProfile.Update
StorageProfile.View

vSphere vCenter Cluster

If VMs will be created in the cluster root

Host.Config.Storage
Resource.AssignVMToPool
VApp.AssignResourcePool
VApp.Import
VirtualMachine.Config.AddNewDisk

vSphere vCenter Resource Pool

If an existing resource pool is provided

Host.Config.Storage
Resource.AssignVMToPool
VApp.AssignResourcePool
VApp.Import
VirtualMachine.Config.AddNewDisk

vSphere Datastore

Always

Datastore.AllocateSpace
Datastore.Browse
Datastore.FileManagement
InventoryService.Tagging.ObjectAttachable

vSphere Port Group

Always

Network.Assign

Virtual Machine Folder

Always

InventoryService.Tagging.ObjectAttachable
Resource.AssignVMToPool
VApp.Import
VirtualMachine.Config.AddExistingDisk
VirtualMachine.Config.AddNewDisk
VirtualMachine.Config.AddRemoveDevice
VirtualMachine.Config.AdvancedConfig
VirtualMachine.Config.Annotation
VirtualMachine.Config.CPUCount
VirtualMachine.Config.DiskExtend
VirtualMachine.Config.DiskLease
VirtualMachine.Config.EditDevice
VirtualMachine.Config.Memory
VirtualMachine.Config.RemoveDisk
VirtualMachine.Config.Rename
VirtualMachine.Config.ResetGuestInfo
VirtualMachine.Config.Resource
VirtualMachine.Config.Settings
VirtualMachine.Config.UpgradeVirtualHardware
VirtualMachine.Interact.GuestControl
VirtualMachine.Interact.PowerOff
VirtualMachine.Interact.PowerOn
VirtualMachine.Interact.Reset
VirtualMachine.Inventory.Create
VirtualMachine.Inventory.CreateFromExisting
VirtualMachine.Inventory.Delete
VirtualMachine.Provisioning.Clone
VirtualMachine.Provisioning.MarkAsTemplate
VirtualMachine.Provisioning.DeployTemplate

vSphere vCenter Datacenter

If the installation program creates the virtual machine folder

InventoryService.Tagging.ObjectAttachable
Resource.AssignVMToPool
VApp.Import
VirtualMachine.Config.AddExistingDisk
VirtualMachine.Config.AddNewDisk
VirtualMachine.Config.AddRemoveDevice
VirtualMachine.Config.AdvancedConfig
VirtualMachine.Config.Annotation
VirtualMachine.Config.CPUCount
VirtualMachine.Config.DiskExtend
VirtualMachine.Config.DiskLease
VirtualMachine.Config.EditDevice
VirtualMachine.Config.Memory
VirtualMachine.Config.RemoveDisk
VirtualMachine.Config.Rename
VirtualMachine.Config.ResetGuestInfo
VirtualMachine.Config.Resource
VirtualMachine.Config.Settings
VirtualMachine.Config.UpgradeVirtualHardware
VirtualMachine.Interact.GuestControl
VirtualMachine.Interact.PowerOff
VirtualMachine.Interact.PowerOn
VirtualMachine.Interact.Reset
VirtualMachine.Inventory.Create
VirtualMachine.Inventory.CreateFromExisting
VirtualMachine.Inventory.Delete
VirtualMachine.Provisioning.Clone
VirtualMachine.Provisioning.DeployTemplate
VirtualMachine.Provisioning.MarkAsTemplate
Folder.Create
Folder.Delete

Roles and privileges required for installation in vCenter graphical user interface (GUI)
vSphere object for role When required Required privileges in vCenter GUI

vSphere vCenter

Always

Cns.Searchable
"vSphere Tagging"."Assign or Unassign vSphere Tag"
"vSphere Tagging"."Create vSphere Tag Category"
"vSphere Tagging"."Create vSphere Tag"
vSphere Tagging"."Delete vSphere Tag Category"
"vSphere Tagging"."Delete vSphere Tag"
"vSphere Tagging"."Edit vSphere Tag Category"
"vSphere Tagging"."Edit vSphere Tag"
Sessions."Validate session"
"Profile-driven storage"."Profile-driven storage update"
"Profile-driven storage"."Profile-driven storage view"

vSphere vCenter Cluster

If VMs will be created in the cluster root

Host.Configuration."Storage partition configuration"
Resource."Assign virtual machine to resource pool"
VApp."Assign resource pool"
VApp.Import
"Virtual machine"."Change Configuration"."Add new disk"

vSphere vCenter Resource Pool

If an existing resource pool is provided

Host.Configuration."Storage partition configuration"
Resource."Assign virtual machine to resource pool"
VApp."Assign resource pool"
VApp.Import
"Virtual machine"."Change Configuration"."Add new disk"

vSphere Datastore

Always

Datastore."Allocate space"
Datastore."Browse datastore"
Datastore."Low level file operations"
"vSphere Tagging"."Assign or Unassign vSphere Tag on Object"

vSphere Port Group

Always

Network."Assign network"

Virtual Machine Folder

Always

"vSphere Tagging"."Assign or Unassign vSphere Tag on Object"
Resource."Assign virtual machine to resource pool"
VApp.Import
"Virtual machine"."Change Configuration"."Add existing disk"
"Virtual machine"."Change Configuration"."Add new disk"
"Virtual machine"."Change Configuration"."Add or remove device"
"Virtual machine"."Change Configuration"."Advanced configuration"
"Virtual machine"."Change Configuration"."Set annotation"
"Virtual machine"."Change Configuration"."Change CPU count"
"Virtual machine"."Change Configuration"."Extend virtual disk"
"Virtual machine"."Change Configuration"."Acquire disk lease"
"Virtual machine"."Change Configuration"."Modify device settings"
"Virtual machine"."Change Configuration"."Change Memory"
"Virtual machine"."Change Configuration"."Remove disk"
"Virtual machine"."Change Configuration".Rename
"Virtual machine"."Change Configuration"."Reset guest information"
"Virtual machine"."Change Configuration"."Change resource"
"Virtual machine"."Change Configuration"."Change Settings"
"Virtual machine"."Change Configuration"."Upgrade virtual machine compatibility"
"Virtual machine".Interaction."Guest operating system management by VIX API"
"Virtual machine".Interaction."Power off"
"Virtual machine".Interaction."Power on"
"Virtual machine".Interaction.Reset
"Virtual machine"."Edit Inventory"."Create new"
"Virtual machine"."Edit Inventory"."Create from existing"
"Virtual machine"."Edit Inventory"."Remove"
"Virtual machine".Provisioning."Clone virtual machine"
"Virtual machine".Provisioning."Mark as template"
"Virtual machine".Provisioning."Deploy template"

vSphere vCenter Datacenter

If the installation program creates the virtual machine folder

"vSphere Tagging"."Assign or Unassign vSphere Tag on Object"
Resource."Assign virtual machine to resource pool"
VApp.Import
"Virtual machine"."Change Configuration"."Add existing disk"
"Virtual machine"."Change Configuration"."Add new disk"
"Virtual machine"."Change Configuration"."Add or remove device"
"Virtual machine"."Change Configuration"."Advanced configuration"
"Virtual machine"."Change Configuration"."Set annotation"
"Virtual machine"."Change Configuration"."Change CPU count"
"Virtual machine"."Change Configuration"."Extend virtual disk"
"Virtual machine"."Change Configuration"."Acquire disk lease"
"Virtual machine"."Change Configuration"."Modify device settings"
"Virtual machine"."Change Configuration"."Change Memory"
"Virtual machine"."Change Configuration"."Remove disk"
"Virtual machine"."Change Configuration".Rename
"Virtual machine"."Change Configuration"."Reset guest information"
"Virtual machine"."Change Configuration"."Change resource"
"Virtual machine"."Change Configuration"."Change Settings"
"Virtual machine"."Change Configuration"."Upgrade virtual machine compatibility"
"Virtual machine".Interaction."Guest operating system management by VIX API"
"Virtual machine".Interaction."Power off"
"Virtual machine".Interaction."Power on"
"Virtual machine".Interaction.Reset
"Virtual machine"."Edit Inventory"."Create new"
"Virtual machine"."Edit Inventory"."Create from existing"
"Virtual machine"."Edit Inventory"."Remove"
"Virtual machine".Provisioning."Clone virtual machine"
"Virtual machine".Provisioning."Deploy template"
"Virtual machine".Provisioning."Mark as template"
Folder."Create folder"
Folder."Delete folder"

Additionally, the user requires some ReadOnly permissions, and some of the roles require permission to propagate the permissions to child objects. These settings vary depending on whether you install the cluster into an existing folder.

Required permissions and propagation settings

vSphere object                         | When required                            | Propagate to children | Permissions required
-------------------------------------- | ---------------------------------------- | --------------------- | --------------------
vSphere vCenter                        | Always                                    | False                 | Listed required privileges
vSphere vCenter Datacenter             | Existing folder                           | False                 | ReadOnly permission
vSphere vCenter Datacenter             | Installation program creates the folder   | True                  | Listed required privileges
vSphere vCenter Cluster                | Existing resource pool                    | False                 | ReadOnly permission
vSphere vCenter Cluster                | VMs in cluster root                       | True                  | Listed required privileges
vSphere vCenter Datastore              | Always                                    | False                 | Listed required privileges
vSphere Switch                         | Always                                    | False                 | ReadOnly permission
vSphere Port Group                     | Always                                    | False                 | Listed required privileges
vSphere vCenter Virtual Machine Folder | Existing folder                           | True                  | Listed required privileges
vSphere vCenter Resource Pool          | Existing resource pool                    | True                  | Listed required privileges

For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation.
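If you use the community govc CLI to manage vSphere, you can quickly confirm that the account you created can authenticate against vCenter. The installation program does not require govc; this is a minimal sketch with hypothetical connection values:

    $ export GOVC_URL='vcenter.example.com'
    $ export GOVC_USERNAME='ocp-installer@vsphere.local'
    $ export GOVC_PASSWORD='<password>'
    $ govc about

A response that reports the vCenter version confirms that the credentials are valid, although it does not verify the individual privileges listed in the preceding tables.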

Using OpenShift Container Platform with vMotion

If you intend to use vMotion in your vSphere environment, consider the following before installing an OpenShift Container Platform cluster.

  • OpenShift Container Platform generally supports compute-only vMotion. Using Storage vMotion can cause issues and is not supported.

    To help ensure the uptime of your compute and control plane nodes, it is recommended that you follow the VMware best practices for vMotion. It is also recommended to use VMware anti-affinity rules to improve the availability of OpenShift Container Platform during maintenance or hardware issues.

    For more information about vMotion and anti-affinity rules, see the VMware vSphere documentation for vMotion networking requirements and VM anti-affinity rules.

  • If you are using vSphere volumes in your pods, migrating a VM across datastores, either manually or through Storage vMotion, causes invalid references within OpenShift Container Platform persistent volume (PV) objects. These references prevent affected pods from starting up and can result in data loss.

  • Similarly, OpenShift Container Platform does not support the following: selective migration of VMDKs across datastores; the use of datastore clusters for VM provisioning or for dynamic or static provisioning of PVs; or the use of a datastore that is part of a datastore cluster for dynamic or static provisioning of PVs.

Cluster resources

When you deploy an OpenShift Container Platform cluster that uses installer-provisioned infrastructure, the installation program must be able to create several resources in your vCenter instance.

A standard OpenShift Container Platform installation creates the following vCenter resources:

  • 1 Folder

  • 1 Tag category

  • 1 Tag

  • Virtual machines:

    • 1 template

    • 1 temporary bootstrap node

    • 3 control plane nodes

    • 3 compute machines

Although these resources use 856 GB of storage, the bootstrap node is destroyed during the cluster installation process, so a minimum of 800 GB of storage is required to deploy a standard cluster.

If you deploy more compute machines, the OpenShift Container Platform cluster will use more storage.

Cluster limits

Available resources vary between clusters. The number of possible clusters within a vCenter is limited primarily by available storage space and any limitations on the number of required resources. Be sure to consider both the limitations on the vCenter resources that the cluster creates and the resources that you require to deploy a cluster, such as IP addresses and networks.

Networking requirements

You must use the Dynamic Host Configuration Protocol (DHCP) for the network and ensure that the DHCP server is configured to provide persistent IP addresses to the cluster machines. You must also configure the DHCP lease to use the default gateway. All nodes must be in the same VLAN. You cannot scale the cluster using a second VLAN as a Day 2 operation. Additionally, you must create the following networking resources before you install the OpenShift Container Platform cluster:

It is recommended that each OpenShift Container Platform node in the cluster have access to a Network Time Protocol (NTP) server that is discoverable via DHCP. Installation is possible without an NTP server; however, asynchronous server clocks cause errors, which an NTP server prevents.

Required IP Addresses

An installer-provisioned vSphere installation requires two static IP addresses:

  • The API address is used to access the cluster API.

  • The Ingress address is used for cluster ingress traffic.

You must provide these IP addresses to the installation program when you install the OpenShift Container Platform cluster.

DNS records

You must create DNS records for two static IP addresses in the appropriate DNS server for the vCenter instance that hosts your OpenShift Container Platform cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. An example zone file follows the table.

Table 6. Required DNS records
Component Record Description

API VIP

api.<cluster_name>.<base_domain>.

This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster.

Ingress VIP

*.apps.<cluster_name>.<base_domain>.

A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster.
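For example, in a BIND zone file for a hypothetical cluster named ocp4 with the base domain example.com and allocated addresses 192.168.1.10 and 192.168.1.11, the two records might look like the following. Substitute your own names and addresses:

    api.ocp4.example.com.     IN  A  192.168.1.10
    *.apps.ocp4.example.com.  IN  A  192.168.1.11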

Generating a key pair for cluster node SSH access

During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.

After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.

If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.

Do not skip this procedure in production environments, where disaster recovery and debugging is required.

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure
  1. If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:

    $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> (1)
    1 Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.

    If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

  2. View the public SSH key:

    $ cat <path>/<file_name>.pub

    For example, run the following to view the ~/.ssh/id_ed25519.pub public key:

    $ cat ~/.ssh/id_ed25519.pub
  3. Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.

    On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.

    1. If the ssh-agent process is not already running for your local user, start it as a background task:

      $ eval "$(ssh-agent -s)"
      Example output
      Agent pid 31874

      If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.

  4. Add your SSH private key to the ssh-agent:

    $ ssh-add <path>/<file_name> (1)
    1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519.
    Example output
    Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
  • When you install OpenShift Container Platform, provide the SSH public key to the installation program.

Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites
  • You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure
  1. Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.

  2. Select your infrastructure provider.

  3. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files.

    The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both are required to delete the cluster.

    Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

  4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

    $ tar -xvf openshift-install-linux.tar.gz
  5. Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.

Adding vCenter root CA certificates to your system trust

Because the installation program requires access to your vCenter’s API, you must add your vCenter’s trusted root CA certificates to your system trust before you install an OpenShift Container Platform cluster.

Procedure
  1. From the vCenter home page, download the vCenter’s root CA certificates. Click Download trusted root CA certificates in the vSphere Web Services SDK section. The <vCenter>/certs/download.zip file downloads.

  2. Extract the compressed file that contains the vCenter root CA certificates. The contents of the compressed file resemble the following file structure:

    certs
    ├── lin
    │   ├── 108f4d17.0
    │   ├── 108f4d17.r1
    │   ├── 7e757f6a.0
    │   ├── 8e4f8471.0
    │   └── 8e4f8471.r0
    ├── mac
    │   ├── 108f4d17.0
    │   ├── 108f4d17.r1
    │   ├── 7e757f6a.0
    │   ├── 8e4f8471.0
    │   └── 8e4f8471.r0
    └── win
        ├── 108f4d17.0.crt
        ├── 108f4d17.r1.crl
        ├── 7e757f6a.0.crt
        ├── 8e4f8471.0.crt
        └── 8e4f8471.r0.crl
    
    3 directories, 15 files
  3. Add the files for your operating system to the system trust. For example, on a Fedora operating system, run the following command:

    # cp certs/lin/* /etc/pki/ca-trust/source/anchors
  4. Update your system trust. For example, on a Fedora operating system, run the following command:

    # update-ca-trust extract

Creating the installation configuration file

You can customize the OpenShift Container Platform cluster you install on VMware vSphere.

Prerequisites
  • Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure
  1. Create the install-config.yaml file.

    1. Change to the directory that contains the installation program and run the following command:

      $ ./openshift-install create install-config --dir <installation_directory> (1)
      1 For <installation_directory>, specify the directory name to store the files that the installation program creates.

      Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

    2. At the prompts, provide the configuration details for your cloud:

      1. Optional: Select an SSH key to use to access your cluster machines.

        For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

      2. Select vsphere as the platform to target.

      3. Specify the name of your vCenter instance.

      4. Specify the user name and password for the vCenter account that has the required permissions to create the cluster.

        The installation program connects to your vCenter instance.

      5. Select the datacenter in your vCenter instance to connect to.

      6. Select the default vCenter datastore to use.

      7. Select the vCenter cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool.

      8. Select the network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured.

      9. Enter the virtual IP address that you configured for control plane API access.

      10. Enter the virtual IP address that you configured for cluster ingress.

      11. Enter the base domain. This base domain must be the same one that you used in the DNS records that you configured.

      12. Enter a descriptive name for your cluster. The cluster name you enter must match the cluster name you specified when configuring the DNS records.

      13. Paste the pull secret from the Red Hat OpenShift Cluster Manager.

  2. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section.

  3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

    The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
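    For example, to keep a copy before you run the installation program (the backup file name is arbitrary):

    $ cp <installation_directory>/install-config.yaml install-config.backup.yaml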

Installation configuration parameters

Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.

After installation, you cannot modify these parameters in the install-config.yaml file.

Required configuration parameters

Required installation configuration parameters are described in the following table:

Table 7. Required parameters
Parameter Description Values

apiVersion

The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.

String

baseDomain

The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.

A fully-qualified domain or subdomain name, such as example.com.

metadata

Kubernetes resource ObjectMeta, from which only the name parameter is consumed.

Object

metadata.name

The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}.

String of lowercase letters and hyphens (-), such as dev.

platform

The configuration for the specific platform upon which to perform the installation: alibabacloud, aws, baremetal, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {}. For additional information about platform.<platform> parameters, consult the table for your specific platform that follows.

Object

pullSecret

Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.

{
   "auths":{
      "cloud.openshift.com":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      },
      "quay.io":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      }
   }
}

Network configuration parameters

You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.

Only IPv4 addresses are supported.

Table 8. Network parameters
Parameter Description Values

networking

The configuration for the cluster network.

Object

You cannot modify parameters specified by the networking object after installation.

networking.networkType

The cluster network provider Container Network Interface (CNI) plugin to install.

Either OpenShiftSDN or OVNKubernetes. OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN.

networking.clusterNetwork

The IP address blocks for pods.

The default value is 10.128.0.0/14 with a host prefix of /23.

If you specify multiple IP address blocks, the blocks must not overlap.

An array of objects. For example:

networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23

networking.clusterNetwork.cidr

Required if you use networking.clusterNetwork. An IP address block.

An IPv4 network.

An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32.

networking.clusterNetwork.hostPrefix

The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr. A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses.

A subnet prefix.

The default value is 23.

networking.serviceNetwork

The IP address block for services. The default value is 172.30.0.0/16.

The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network.

An array with an IP address block in CIDR format. For example:

networking:
  serviceNetwork:
   - 172.30.0.0/16

networking.machineNetwork

The IP address blocks for machines.

If you specify multiple IP address blocks, the blocks must not overlap.

An array of objects. For example:

networking:
  machineNetwork:
  - cidr: 10.0.0.0/16

networking.machineNetwork.cidr

Required if you use networking.machineNetwork. An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24.

An IP network block in CIDR notation.

For example, 10.0.0.0/16.

Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in.
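Putting these parameters together, a customized networking stanza in the install-config.yaml file might look like the following sketch. The clusterNetwork and serviceNetwork values are the documented defaults; the machineNetwork CIDR is a hypothetical value that you must replace with the subnet that your machines use:

    networking:
      networkType: OVNKubernetes
      clusterNetwork:
      - cidr: 10.128.0.0/14
        hostPrefix: 23
      machineNetwork:
      - cidr: 192.168.25.0/24
      serviceNetwork:
      - 172.30.0.0/16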

Optional configuration parameters

Optional installation configuration parameters are described in the following table:

Table 9. Optional parameters
Parameter Description Values

additionalTrustBundle

A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.

String

cgroupsV2

Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time.

true

compute

The configuration for the machines that comprise the compute nodes.

Array of MachinePool objects.

compute.architecture

Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default).

String

compute.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

compute.name

Required if you use compute. The name of the machine pool.

worker

compute.platform

Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.

alibabacloud, aws, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {}

compute.replicas

The number of compute machines, which are also known as worker machines, to provision.

A positive integer greater than or equal to 2. The default value is 3.

controlPlane

The configuration for the machines that comprise the control plane.

Array of MachinePool objects.

controlPlane.architecture

Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default).

String

controlPlane.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

controlPlane.name

Required if you use controlPlane. The name of the machine pool.

master

controlPlane.platform

Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.

alibabacloud, aws, azure, gcp, ibmcloud, openstack, ovirt, vsphere, or {}

controlPlane.replicas

The number of control plane machines to provision.

The only supported value is 3, which is the default value.

credentialsMode

The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.

Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content.

If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint, Passthrough or Manual.

Mint, Passthrough, Manual or an empty string ("").

fips

Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode. The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.

If you are using Azure File storage, you cannot enable FIPS mode.

false or true

imageContentSources

Sources and repositories for the release-image content.

Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source

Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.

String

imageContentSources.mirrors

Specify one or more repositories that may also contain the same images.

Array of strings

publish

How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API and OpenShift routes.

Internal or External. The default value is External.

Setting this field to Internal is not supported on non-cloud platforms and IBM Cloud VPC.

If the value of the field is set to Internal, the cluster will become non-functional. For more information, refer to BZ#1953035.

sshKey

The SSH key or keys to authenticate access to your cluster machines.

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

One or more keys. For example:

sshKey:
  <key1>
  <key2>
  <key3>

Additional VMware vSphere configuration parameters

Additional VMware vSphere configuration parameters are described in the following table:

Table 10. Additional VMware vSphere cluster parameters
Parameter Description Values

platform.vsphere.vCenter

The fully-qualified hostname or IP address of the vCenter server.

String

platform.vsphere.username

The user name to use to connect to the vCenter instance. This user must have at least the roles and privileges that are required for static or dynamic persistent volume provisioning in vSphere.

String

platform.vsphere.password

The password for the vCenter user name.

String

platform.vsphere.datacenter

The name of the datacenter to use in the vCenter instance.

String

platform.vsphere.defaultDatastore

The name of the default datastore to use for provisioning volumes.

String

platform.vsphere.folder

Optional. The absolute path of an existing folder where the installation program creates the virtual machines. If you do not provide this value, the installation program creates a folder that is named with the infrastructure ID in the datacenter virtual machine folder.

String, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name>.

platform.vsphere.resourcePool

Optional. The absolute path of an existing resource pool where the installer creates the virtual machines. If you do not specify a value, resources are installed in the root of the cluster /<datacenter_name>/host/<cluster_name>/Resources.

String, for example, /<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>/<optional_nested_resource_pool_name>.

platform.vsphere.network

The network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured.

String

platform.vsphere.cluster

The vCenter cluster to install the OpenShift Container Platform cluster in.

String

platform.vsphere.apiVIP

The virtual IP (VIP) address that you configured for control plane API access.

An IP address, for example 128.0.0.1.

platform.vsphere.ingressVIP

The virtual IP (VIP) address that you configured for cluster ingress.

An IP address, for example 128.0.0.1.

platform.vsphere.diskType

Optional. The disk provisioning method. This value defaults to the vSphere default storage policy if not set.

Valid values are thin, thick, or eagerZeroedThick.

Optional VMware vSphere machine pool configuration parameters

Optional VMware vSphere machine pool configuration parameters are described in the following table:

Table 11. Optional VMware vSphere machine pool parameters
Parameter Description Values

platform.vsphere.clusterOSImage

The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network.

An HTTP or HTTPS URL, optionally with a SHA-256 checksum. For example, https://mirror.openshift.com/images/rhcos-<version>-vmware.<architecture>.ova.

platform.vsphere.osDisk.diskSizeGB

The size of the disk in gigabytes.

Integer

platform.vsphere.cpus

The total number of virtual processor cores to assign a virtual machine. The value of platform.vsphere.cpus must be a multiple of the platform.vsphere.coresPerSocket value.

Integer

platform.vsphere.coresPerSocket

The number of cores per socket in a virtual machine. The number of virtual sockets on the virtual machine is platform.vsphere.cpus/platform.vsphere.coresPerSocket. The default value for control plane nodes and worker nodes is 4 and 2, respectively.

Integer

platform.vsphere.memoryMB

The size of a virtual machine’s memory in megabytes.

Integer

Sample install-config.yaml file for an installer-provisioned VMware vSphere cluster

You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.

apiVersion: v1
baseDomain: example.com (1)
compute: (2)
- hyperthreading: Enabled (3)
  name: worker
  replicas: 3
  platform:
    vsphere: (4)
      cpus: 2
      coresPerSocket: 2
      memoryMB: 8192
      osDisk:
        diskSizeGB: 120
controlPlane: (2)
  hyperthreading: Enabled (3)
  name: master
  replicas: 3
  platform:
    vsphere: (4)
      cpus: 4
      coresPerSocket: 2
      memoryMB: 16384
      osDisk:
        diskSizeGB: 120
metadata:
  name: cluster (5)
platform:
  vsphere:
    vcenter: your.vcenter.server
    username: username
    password: password
    datacenter: datacenter
    defaultDatastore: datastore
    folder: folder
    resourcePool: resource_pool (6)
    diskType: thin (7)
    network: VM_Network
    cluster: vsphere_cluster_name (8)
    apiVIP: api_vip
    ingressVIP: ingress_vip
fips: false
pullSecret: '{"auths": ...}'
sshKey: 'ssh-ed25519 AAAA...'
1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name.
2 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used.
3 Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Your machines must use at least 8 CPUs and 32 GB of RAM if you disable simultaneous multithreading.

4 Optional: Provide additional configuration for the machine pool parameters for the compute and control plane machines.
5 The cluster name that you specified in your DNS records.
6 Optional: Provide an existing resource pool for machine creation. If you do not specify a value, the installation program uses the root resource pool of the vSphere cluster.
7 The vSphere disk provisioning method.
8 The vSphere cluster to install the OpenShift Container Platform cluster in.

Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Prerequisites
  • You have an existing install-config.yaml file.

  • You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object’s spec.noProxy field to bypass the proxy if necessary.

    The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

    For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).

Procedure
  1. Edit your install-config.yaml file and add the proxy settings. For example:

    apiVersion: v1
    baseDomain: my.domain.com
    proxy:
      httpProxy: http://<username>:<pswd>@<ip>:<port> (1)
      httpsProxy: https://<username>:<pswd>@<ip>:<port> (2)
      noProxy: example.com (3)
    additionalTrustBundle: | (4)
        -----BEGIN CERTIFICATE-----
        <MY_TRUSTED_CA_CERT>
        -----END CERTIFICATE-----
    ...
    1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
    2 A proxy URL to use for creating HTTPS connections outside the cluster.
    3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations. You must include vCenter’s IP address and the IP range that you use for its machines.
    4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy’s identity certificate is signed by an authority from the RHCOS trust bundle.

    The installation program does not support the proxy readinessEndpoints field.

    If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:

    $ ./openshift-install wait-for install-complete --log-level debug
  2. Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.

Only the Proxy object named cluster is supported, and no additional proxies can be created.
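After the cluster is running, you can inspect the Proxy object that the installation program created to confirm which settings were applied. This is a post-installation check, not part of the installation procedure:

    $ oc get proxy/cluster -o yaml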

Deploying the cluster

You can install OpenShift Container Platform on a compatible cloud platform.

You can run the create cluster command of the installation program only once, during initial installation.

Prerequisites
  • Configure an account with the cloud platform that hosts your cluster.

  • Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure
  1. Change to the directory that contains the installation program and initialize the cluster deployment:

    $ ./openshift-install create cluster --dir <installation_directory> \ (1)
        --log-level=info (2)
    
    1 For <installation_directory>, specify the location of your customized ./install-config.yaml file.
    2 To view different installation details, specify warn, debug, or error instead of info.

    Use the openshift-install command from the bastion hosted in the VMC environment.

    If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.

    When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

    Example output
    ...
    INFO Install complete!
    INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
    INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
    INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
    INFO Time elapsed: 36m22s

    The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

    • The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates; see the documentation for Recovering from expired control plane certificates for more information, and the example commands after this list.

    • It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
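    A minimal sketch of the CSR approval mentioned in the first item, assuming a working oc session; the CSR name is a placeholder:

    $ oc get csr
    $ oc adm certificate approve <csr_name>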

    You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.

Installing the OpenShift CLI by downloading the binary

You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.10. Download and install the new version of oc.

Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure
  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.

  2. Select the appropriate version in the Version drop-down menu.

  3. Click Download Now next to the OpenShift v4.10 Linux Client entry and save the file.

  4. Unpack the archive:

    $ tar xvf <file>
  5. Place the oc binary in a directory that is on your PATH.

    To check your PATH, execute the following command:

    $ echo $PATH
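    For example, assuming /usr/local/bin is on your PATH, which is a common default, you might run:

    $ sudo mv oc /usr/local/bin/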

After you install the OpenShift CLI, it is available using the oc command:

$ oc <command>
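For example, to verify that the client installed correctly, check its version:

$ oc version --client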

Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure
  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.

  2. Select the appropriate version in the Version drop-down menu.

  3. Click Download Now next to the OpenShift v4.10 Windows Client entry and save the file.

  4. Unzip the archive with a ZIP program.

  5. Move the oc binary to a directory that is on your PATH.

    To check your PATH, open the command prompt and execute the following command:

    C:\> path

After you install the OpenShift CLI, it is available using the oc command:

C:\> oc <command>

Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

Procedure
  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.

  2. Select the appropriate version in the Version drop-down menu.

  3. Click Download Now next to the OpenShift v4.10 MacOSX Client entry and save the file.

  4. Unpack and unzip the archive.

  5. Move the oc binary to a directory on your PATH.

    To check your PATH, open a terminal and execute the following command:

    $ echo $PATH

After you install the OpenShift CLI, it is available using the oc command:

$ oc <command>

Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites
  • You deployed an OpenShift Container Platform cluster.

  • You installed the oc CLI.

Procedure
  1. Export the kubeadmin credentials:

    $ export KUBECONFIG=<installation_directory>/auth/kubeconfig (1)
    1 For <installation_directory>, specify the path to the directory that you stored the installation files in.
  2. Verify you can run oc commands successfully using the exported configuration:

    $ oc whoami
    Example output
    system:admin
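    As an additional check, you can list the cluster nodes with the exported configuration; a healthy cluster reports its control plane and compute nodes as Ready:

    $ oc get nodes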

Creating registry storage

After you install the cluster, you must create storage for the Registry Operator.

Image registry removed during installation

On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed. This allows openshift-installer to complete installations on these platform types.

After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed.
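One way to apply that change, shown here as a sketch using oc patch (you can also set the field interactively with oc edit):

$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed"}}'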

Image registry storage configuration

The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available.

Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters.

Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades.

Configuring registry storage for VMware vSphere

As a cluster administrator, you must configure your registry to use storage after installation.

Prerequisites
  • Cluster administrator permissions.

  • A cluster on VMware vSphere.

  • Persistent storage provisioned for your cluster, such as Red Hat OpenShift Data Foundation.

    OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required.

  • The persistent storage must have a capacity of 100Gi.

Testing shows issues with using the NFS server on RHEL as a storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended.

Other NFS implementations in the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information about any testing that might have been completed against these OpenShift Container Platform core components.

Procedure
  1. To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource.

    When using shared storage, review your security settings to prevent outside access.

  2. Verify that you do not have a registry pod:

    $ oc get pod -n openshift-image-registry -l docker-registry=default
    Example output
    No resources found in openshift-image-registry namespace

    If you do have a registry pod in your output, you do not need to continue with this procedure.

  3. Check the registry configuration:

    $ oc edit configs.imageregistry.operator.openshift.io
    Example output
    storage:
      pvc:
        claim: (1)
    1 Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when replicating to more than one replica.
  4. Check the clusteroperator status:

    $ oc get clusteroperator image-registry
    Example output
    NAME             VERSION                              AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
    image-registry   4.10                                 True        False         False      6h50m

Configuring block registry storage for VMware vSphere

As a cluster administrator, you can use the Recreate rollout strategy to allow the image registry to use block storage types, such as vSphere Virtual Machine Disk (VMDK), during upgrades.

Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica.

Procedure
  1. To set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy and runs with only 1 replica:

    $ oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}'
  2. Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode.

    1. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object:

      kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: image-registry-storage (1)
        namespace: openshift-image-registry (2)
      spec:
        accessModes:
        - ReadWriteOnce (3)
        resources:
          requests:
            storage: 100Gi (4)
      1 A unique name that represents the PersistentVolumeClaim object.
      2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry.
      3 The access mode of the persistent volume claim. With ReadWriteOnce, the volume can be mounted with read and write permissions by a single node.
      4 The size of the persistent volume claim.
    2. Create the PersistentVolumeClaim object from the file:

      $ oc create -f pvc.yaml -n openshift-image-registry
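      You can then confirm that the claim exists and, once a volume is provisioned, reports a Bound status:

      $ oc get pvc image-registry-storage -n openshift-image-registry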
  3. Edit the registry configuration so that it references the correct PVC:

    $ oc edit config.imageregistry.operator.openshift.io -o yaml
    Example output
    storage:
      pvc:
        claim: (1)
    1 Leave the claim field blank. The registry defaults to a PVC that is named image-registry-storage, so the custom PVC that you created with that name is used automatically.

For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere.

Backing up VMware vSphere volumes

OpenShift Container Platform provisions new volumes as independent persistent disks so that the volume can be freely attached and detached on any node in the cluster. As a consequence, it is not possible to back up volumes by using snapshots, or to restore volumes from snapshots. See Snapshot Limitations for more information.

Procedure

To create a backup of persistent volumes:

  1. Stop the application that is using the persistent volume.

  2. Clone the persistent volume.

  3. Restart the application.

  4. Create a backup of the cloned volume.

  5. Delete the cloned volume.
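For steps 1 and 3, one way to stop and restart an application that is managed by a Deployment is to scale it down and back up. The deployment name and namespace are placeholders:

$ oc scale deployment/<app_name> --replicas=0 -n <namespace>
# clone the volume, back up the clone, and delete the clone (steps 2, 4, and 5), then:
$ oc scale deployment/<app_name> --replicas=1 -n <namespace>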

Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.10, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.

After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
