
In OpenShift Container Platform 4.15, you can use the Agent-based Installer to install a cluster on Oracle® Cloud Infrastructure (OCI), so that you can run cluster workloads on infrastructure that supports dedicated, hybrid, public, and multiple cloud environments.

The Agent-based Installer and OCI overview

You can install an OpenShift Container Platform cluster on Oracle® Cloud Infrastructure (OCI) by using the Agent-based Installer. Both Red Hat and Oracle test, validate, and support running OCI and Oracle® Cloud VMware Solution (OCVS) workloads in an OpenShift Container Platform cluster on OCI.

Using the Agent-based Installer to install an OpenShift Container Platform cluster on OCI that is configured with a virtual machine (VM) compute instance is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

The Agent-based Installer provides the ease of use of the Assisted Installation service, but with the capability to install a cluster in either a connected or disconnected environment.

OCI provides services that can meet your regulatory compliance, performance, and cost-effectiveness needs. OCI supports 64-bit x86 instances and 64-bit ARM instances. Additionally, OCI provides an OCVS service where you can move VMware workloads to OCI with minimal application re-architecture.

Consider selecting a nonvolatile memory express (NVMe) drive or a solid-state drive (SSD) for your boot disk, because these drives offer low latency and high throughput.

By running your OpenShift Container Platform cluster on OCI, you can access the following capabilities:

  • Compute flexible shapes, where you can customize the number of Oracle® CPUs (OCPUs) and memory resources for your VM. With access to this capability, a cluster’s workload can perform operations in a resource-balanced environment. You can find all RHEL-certified OCI shapes by going to the Oracle page on the Red Hat Ecosystem Catalog portal.

  • Block Volume storage, where you can configure scaling and auto-tuning settings for your storage volume, so that the Block Volume service automatically adjusts the performance level to optimize performance.

  • OCVS, where you can deploy a cluster in a public-cloud environment that operates on a VMware® vSphere software-defined data center (SDDC). You continue to retain full-administrative control over your VMware vSphere environment, but you can use OCI services to improve your applications on flexible, scalable, and secure infrastructure.

To ensure the best performance conditions for your cluster workloads that operate on OCI and on the OCVS service, ensure that the volume performance units (VPUs) for your block volume are sized for your workloads. The following list provides some guidance in selecting the VPUs needed for specific performance needs:

  • Test or proof of concept environment: 100 GB, and 20 to 30 VPUs.

  • Basic environment: 500 GB, and 60 VPUs.

  • Heavy production environment: More than 500 GB, and 100 or more VPUs.

Consider reserving additional VPUs to provide sufficient capacity for updates and scaling activities. For more information about VPUs, see Volume Performance Units in the Oracle documentation.
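
For example, if you manage storage with the OCI command-line interface (CLI), you might provision a block volume for a basic environment as follows. This is a sketch only; the display name is illustrative, and the compartment OCID and availability domain are placeholders for your own values:

    $ oci bv volume create \
        --compartment-id <compartment_ocid> \
        --availability-domain <availability_domain> \
        --display-name oci-cluster-volume \
        --size-in-gbs 500 \
        --vpus-per-gb 60   # 60 VPUs matches the basic-environment guidance above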

Creating OCI infrastructure resources and services

Before you install OpenShift Container Platform on Oracle® Cloud Infrastructure (OCI), you must create an OCI environment on your virtual machine (VM) shape. By creating this environment, you can install OpenShift Container Platform and deploy a cluster on infrastructure that supports a wide range of cloud options and strong security policies.

Prerequisites
  • You have prior knowledge of OCI components. See Learn About Oracle Cloud Basics in the Oracle documentation.

  • Your organization signed up for an Oracle account and Identity Domain. This step is required so that you can access an administrator account, which is the initial cloud identity and access management (IAM) user for your organization. See "The administrators group and policy" section in the Oracle documentation.

  • You have logged in to your organization’s OCI account with administrator privileges.

Procedure
  1. Create a compartment and note its Oracle® Cloud Identifier (OCID). A compartment is a component where you can organize and isolate your cloud resources. After you create a compartment, Oracle automatically assigns an OCID to it. An administrator can access all compartments tagged to your organization’s OCI account.
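
    If you prefer to script this step, the OCI CLI can create the compartment. This is a sketch only; the compartment name and description are illustrative, and the parent compartment OCID is typically your tenancy OCID:

    $ oci iam compartment create \
        --compartment-id <tenancy_ocid> \
        --name oci-cluster-compartment \
        --description "Resources for the OpenShift Container Platform cluster"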

  2. Create a virtual cloud network (VCN). A compute instance, load balancer, and other resources need this network infrastructure to connect to each other over an internet connection. To establish an on-premises network, you must manually create subnets, gateways, routing rules, and security policies. Ensure that you complete the following steps:

    1. Under Primary VNIC IP addresses → Primary network, select a VCN, such as oci-cluster-vcn.

    2. From the Subnet section, select your subnet, such as oci-cluster-private-subnet.

    3. For public IPv4 subnets, ensure that you select the Do not assign a public IPv4 address checkbox.
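
    If you script the network setup instead, you can create the VCN with the OCI CLI before adding subnets, gateways, routing rules, and security policies. This is a sketch only; the CIDR block and display name are example values:

    $ oci network vcn create \
        --compartment-id <compartment_ocid> \
        --display-name oci-cluster-vcn \
        --cidr-block 10.0.0.0/16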

  3. Create a network security group (NSG) in your VCN. You can use the NSG to establish advanced security rules for your network. You must locate the NSG in your compartment, so that certain groups can access network resources. Ensure that you complete the following steps:

    1. Click Show advanced options.

    2. Select the Use network security groups to control traffic checkbox.

    3. Set your NSG, such as oci-cluster-controlplane-nsg.

  4. Create a dynamic group that hosts compute instances. After you create the dynamic group, you can then create a policy statement that defines rules for your cluster environment. This statement enables each compute instance to join your OpenShift Container Platform cluster as a self-managed node.

  5. Create a policy statement. You must create a policy so that your administrator can grant access to your groups, users, or resources that operate in your network.

  6. Create a load balancer, so that you can provide automated traffic distribution on your VCN.
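
    As a sketch of the equivalent OCI CLI call, a flexible-shape load balancer on your subnet might be created as follows; the bandwidth values and display name are assumptions for illustration:

    $ oci lb load-balancer create \
        --compartment-id <compartment_ocid> \
        --display-name oci-cluster-api-lb \
        --shape-name flexible \
        --shape-details '{"minimumBandwidthInMbps": 10, "maximumBandwidthInMbps": 100}' \
        --subnet-ids '["<subnet_ocid>"]'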

  7. Create three Domain Name System (DNS) records and then add the records to a DNS, so that Oracle’s edge network can manage your domain’s DNS queries.

    To ensure compatibility with OpenShift Container Platform, set A as the record type for each DNS record and name records as follows:

    • api.<cluster_name>.<base_domain>, which targets the apiVIP parameter of the API load balancer.

    • api-int.<cluster_name>.<base_domain>, which targets the apiVIP parameter of the API load balancer.

    • *.apps.<cluster_name>.<base_domain>, which targets the ingressVIP parameter of the Ingress load balancer.

    The api.* and api-int.* DNS records relate to control plane machines, so you must ensure that all nodes in your installed OpenShift Container Platform cluster can access these DNS records.
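
    After the records propagate, you can verify them from a host on your network; the cluster name and base domain are the placeholders used above, and each query is expected to return the corresponding load balancer IP address:

    $ dig +short api.<cluster_name>.<base_domain> A
    $ dig +short api-int.<cluster_name>.<base_domain> A
    $ dig +short test.apps.<cluster_name>.<base_domain> A   # any name under *.apps resolves to the Ingress VIP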

Creating configuration files for installing a cluster on OCI

You need to create the install-config.yaml and the agent-config.yaml configuration files so that you can use the Agent-based Installer to generate a bootable ISO image. The Agent-based installation comprises a bootable ISO that has the Assisted discovery agent and the Assisted Service. Both of these components are required to perform the cluster installation, but the latter component runs on only one of the hosts.

In a later procedure, you can upload your generated agent ISO image to Oracle’s default Object Storage bucket, which is the initial step for integrating your OpenShift Container Platform cluster on Oracle® Cloud Infrastructure (OCI).

You can also use the Agent-based Installer to generate or accept Zero Touch Provisioning (ZTP) custom resources.

Prerequisites
  • You reviewed details about the OpenShift Container Platform installation and update processes.

  • You read the documentation on selecting a cluster installation method and preparing it for users.

  • You have read the "Preparing to install with the Agent-based Installer" documentation.

  • You downloaded the Agent-based Installer and the command-line interface (CLI) from the Red Hat Hybrid Cloud Console.

  • You have logged in to OpenShift Container Platform with administrator privileges.

Procedure
  1. For a disconnected environment, use the mirror registry for Red Hat OpenShift to mirror the required images to your local container image registry.

    Check that your openshift-install binary version relates to your local container image registry and not a shared registry, such as Red Hat Quay.

    $ ./openshift-install version
    Example output for a shared registry binary
    ./openshift-install 4.15.0
    built from commit ae7977b7d1ca908674a0d45c5c243c766fa4b2ca
    release image registry.ci.openshift.org/origin/release:4.15ocp-release@sha256:0da6316466d60a3a4535d5fed3589feb0391989982fba59d47d4c729912d6363
    release architecture amd64
  2. Configure the install-config.yaml configuration file to meet the needs of your organization.

    Example install-config.yaml configuration file that demonstrates setting an external platform
    # install-config.yaml
    apiVersion: v1
    baseDomain: <base_domain> (1)
    networking:
      clusterNetwork:
      - cidr: 10.128.0.0/14
        hostPrefix: 23
      networkType: OVNKubernetes
      machineNetwork:
      - cidr: <ip_address_from_cidr> (2)
      serviceNetwork:
      - 172.30.0.0/16
    compute:
    - architecture: amd64 (3)
      hyperthreading: Enabled
      name: worker
      replicas: 0
    controlPlane:
      architecture: amd64 (3)
      hyperthreading: Enabled
      name: master
      replicas: 3
    platform:
      external:
        platformName: oci (4)
        cloudControllerManager: External
    sshKey: <public_ssh_key> (5)
    pullSecret: '<pull_secret>' (6)
    # ...
    1 The base domain of your cloud provider.
    2 The IP address from the virtual cloud network (VCN) that the CIDR allocates to resources and components that operate on your network.
    3 Depending on your infrastructure, you can select either amd64 or arm64.
    4 Set OCI as the external platform, so that OpenShift Container Platform can integrate with OCI.
    5 Specify your SSH public key.
    6 The pull secret that you need for authentication purposes when downloading container images for OpenShift Container Platform components and services, such as Quay.io. See Install OpenShift Container Platform 4 from the Red Hat Hybrid Cloud Console.
  3. Create a directory on your local system named openshift.

    Do not move the install-config.yaml and agent-config.yaml configuration files to the openshift directory.

  4. From the oracle-quickstart/oci-openshift GitHub page, click the Code button and then click Download ZIP. Save the archive file to your openshift directory, so that all the Oracle Cloud Controller Manager (CCM) and Oracle Container Storage Interface (CSI) manifests exist in the same directory. The downloaded archive file includes files for creating cluster resources and custom manifests.

  5. Go to the custom_manifests web page on GitHub to access the custom manifest files.

    The Oracle CCM manifests are required for deploying the Oracle CCM during cluster installation so that OpenShift Container Platform can connect to the external OCI platform. The Oracle CSI custom manifests are required for deploying the Oracle CSI driver during cluster installation so that OpenShift Container Platform can claim required objects from OCI.

    You must change the secret oci-cloud-controller-manager defined in the oci-ccm.yml configuration file to match your organization’s region, compartment OCID, VCN OCID, and the subnet OCID from the load balancer.
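
    For reference, the secret wraps an oci-cloud-controller-manager cloud-provider configuration. The following is a hedged sketch of only the keys you typically substitute, not the complete file; every OCID is a placeholder:

    # Excerpt of the cloud-provider configuration in the oci-cloud-controller-manager secret
    useInstancePrincipals: true
    compartment: <compartment_ocid>
    vcn: <vcn_ocid>
    loadBalancer:
      subnet1: <load_balancer_subnet_ocid>
      securityListManagementMode: None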

  6. Use the Agent-based Installer to generate a minimal ISO image, which excludes the rootfs image, by entering the following command in your OpenShift Container Platform CLI. You can use this image later in the process to boot all your cluster’s nodes.

    $ ./openshift-install agent create image --log-level debug

    The command also completes the following actions:

    • Creates a ./<installation_directory>/auth subdirectory and places the kubeadmin-password and kubeconfig files in that subdirectory.

    • Creates a rendezvousIP file based on the IP address that you specified in the agent-config.yaml configuration file.

    • Optional: If you modified the agent-config.yaml and install-config.yaml configuration files, your modifications are imported into the Zero Touch Provisioning (ZTP) custom resources.

      The Agent-based Installer uses Red Hat Enterprise Linux CoreOS (RHCOS). The rootfs image, which is mentioned in a later step, is required for booting, recovering, and repairing your operating system.

  7. Configure the agent-config.yaml configuration file to meet your organization’s requirements.

    Example agent-config.yaml configuration file that sets values for an IPv4 network
    apiVersion: v1alpha1
    metadata:
      name: <cluster_name> (1)
      namespace: <cluster_namespace> (2)
    rendezvousIP: <ip_address_from_CIDR> (3)
    bootArtifactsBaseURL: <server_URL> (4)
    # ...
    1 The cluster name that you specified in your DNS record.
    2 The namespace of your cluster on OpenShift Container Platform.
    3 If you use IPv4 as the network IP address format, ensure that you set the rendezvousIP parameter to an IPv4 address that the VCN’s Classless Inter-Domain Routing (CIDR) method allocates on your network. Also ensure that at least one instance from the pool of instances that you booted with the ISO matches the IP address value you set for rendezvousIP.
    4 The URL of the server where you want to upload the rootfs image.
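
    The agent-config.yaml file also accepts an optional hosts list if you want to pin host names, roles, and network interfaces to specific machines. The following is a minimal sketch, assuming one control plane host whose interface name and MAC address you can determine in advance; both values here are placeholders:

    hosts:
    - hostname: oci-cluster-master-0
      role: master
      interfaces:
      - name: ens3                     # assumed interface name
        macAddress: 00:00:00:00:00:00  # replace with the NIC MAC address of the instance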
  8. Apply one of the following two updates to your agent-config.yaml configuration file:

    • For a disconnected network: After you run the command to generate a minimal ISO image, the Agent-based Installer saves the rootfs image into the ./<installation_directory>/boot-artifacts directory on your local system. Use your preferred web server, such as any Hypertext Transfer Protocol daemon (httpd), to upload rootfs to the location stated in the bootArtifactsBaseURL parameter in the agent-config.yaml configuration file. See the upload sketch after this list.

      For example, if the bootArtifactsBaseURL parameter states http://192.168.122.20, you would upload the generated rootfs image to this location so that the Agent-based Installer can access the image from http://192.168.122.20/agent.x86_64-rootfs.img. After the Agent-based Installer boots the minimal ISO for the external platform, the Agent-based Installer downloads the rootfs image from the http://192.168.122.20/agent.x86_64-rootfs.img location into the system memory.

      The Agent-based Installer also adds the value of the bootArtifactsBaseURL parameter to the minimal ISO image’s configuration, so that when a cluster node boots, the Agent-based Installer downloads the rootfs image into system memory.

    • For a connected network: You do not need to specify the bootArtifactsBaseURL parameter in the agent-config.yaml configuration file, because the default behavior of the Agent-based Installer is to read the rootfs URL location from https://rhcos.mirror.openshift.com. After the Agent-based Installer boots the minimal ISO for the external platform, the Agent-based Installer then downloads the rootfs file into your system’s memory from the default RHCOS URL.

      Note that the full ISO image, which includes the rootfs image, is in excess of 1 GB, whereas the minimal ISO image is typically less than 150 MB.
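
    For the disconnected case, the following is a minimal upload-and-verify sketch, assuming an httpd document root of /var/www/html on the example server at 192.168.122.20:

    $ scp ./<installation_directory>/boot-artifacts/agent.x86_64-rootfs.img \
        root@192.168.122.20:/var/www/html/
    $ curl -I http://192.168.122.20/agent.x86_64-rootfs.img   # expect a 200 OK response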

Configuring your firewall for OpenShift Container Platform

Before you install OpenShift Container Platform, you must configure your firewall to grant access to the sites that OpenShift Container Platform requires to function.

For a disconnected environment, you must mirror content from both Red Hat and Oracle. This environment requires that you create firewall rules to expose your firewall to specific ports and registries.

If your environment has a dedicated load balancer in front of your OpenShift Container Platform cluster, review the allowlists between your firewall and load balancer to prevent unwanted network restrictions to your cluster.

Procedure
  1. Set the following registry URLs for your firewall’s allowlist:

    URL                    Port   Function

    registry.redhat.io     443    Provides core container images
    access.redhat.com [1]  443    Hosts all the container images that are stored on the Red Hat Ecosystem Catalog, including core container images
    quay.io                443    Provides core container images
    cdn.quay.io            443    Provides core container images
    cdn01.quay.io          443    Provides core container images
    cdn02.quay.io          443    Provides core container images
    cdn03.quay.io          443    Provides core container images
    sso.redhat.com         443    The https://console.redhat.com site uses authentication from sso.redhat.com

    1. In a firewall environment, ensure that the access.redhat.com resource is on the allowlist. This resource hosts a signature store that a container client requires for verifying images when pulling them from registry.access.redhat.com.

    You can use the wildcards *.quay.io and *.openshiftapps.com instead of cdn.quay.io and cdn0[1-3].quay.io in your allowlist. When you add a site, such as quay.io, to your allowlist, do not add a wildcard entry, such as *.quay.io, to your denylist. In most cases, image registries use a content delivery network (CDN) to serve images. If a firewall blocks access, image downloads are denied when the initial download request redirects to a hostname such as cdn01.quay.io.
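
    You can spot-check reachability from a host behind the firewall. These probes are illustrative and confirm only that HTTPS connections reach each endpoint:

    $ for host in registry.redhat.io quay.io cdn01.quay.io sso.redhat.com; do
        curl -s -o /dev/null -w "%{http_code} $host\n" --connect-timeout 5 "https://$host"
      done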

  2. Set your firewall’s allowlist to include any site that provides resources for a language or framework that your builds require.

  3. If you do not disable Telemetry, you must grant access to the following URLs to access Red Hat Insights:

    URL                         Port   Function

    cert-api.access.redhat.com  443    Required for Telemetry
    api.access.redhat.com       443    Required for Telemetry
    infogw.api.openshift.com    443    Required for Telemetry
    console.redhat.com          443    Required for Telemetry and for insights-operator

  4. Set your firewall’s allowlist to include the following registry URLs:

    URL                         Port   Function

    api.openshift.com           443    Required both for your cluster token and to check if updates are available for the cluster
    rhcos.mirror.openshift.com  443    Required to download Red Hat Enterprise Linux CoreOS (RHCOS) images

  5. Set your firewall’s allowlist to include the following external URLs. Each repository URL hosts Open Container Initiative (OCI) container images. Consider mirroring images to as few repositories as possible to reduce any performance issues.

    URL                     Port   Function

    k8s.gcr.io              port   A Kubernetes registry that hosts container images for a community-based image registry. This image registry is hosted on a custom Google Container Registry (GCR) domain.
    ghcr.io                 port   A GitHub image registry where you can store and manage Open Container Initiative images. Requires an access token to publish, install, and delete private, internal, and public packages.
    storage.googleapis.com  443    A source of release image signatures, although the Cluster Version Operator needs only a single functioning source.
    registry.k8s.io         port   Replaces the k8s.gcr.io image registry because the k8s.gcr.io image registry does not support other platforms and vendors.

Running a cluster on OCI

To run a cluster on Oracle® Cloud Infrastructure (OCI), you must upload the generated agent ISO image to the default Object Storage bucket on OCI. Additionally, you must create a compute instance from the supplied base image, so that your OpenShift Container Platform cluster and OCI can communicate with each other for the purposes of running the cluster on OCI.

Prerequisites
  • You generated an agent ISO image. See the "Creating configuration files for installing a cluster on OCI" section.

Procedure
  1. Upload the agent ISO image to Oracle’s default Object Storage bucket and then import the agent ISO image as a custom image from this bucket. You must then configure the custom image to boot in Unified Extensible Firmware Interface (UEFI) mode. See Creating a custom image and Using the Console in Oracle’s documentation.

    For example, from Compute → Custom images, import the agent ISO image from the bucket, and enter values in the following fields:

    • Name: oci-cluster

    • Bucket: Select the bucket that contains the agent ISO image

    • Object name: Select the name of the agent ISO

    • Image type: QCOW2
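
    If you script the import instead of using the console, the OCI CLI provides an equivalent command. This is a sketch; the Object Storage namespace, bucket, and object names are the values from your upload and are placeholders here:

    $ oci compute image import from-object \
        --compartment-id <compartment_ocid> \
        --namespace <object_storage_namespace> \
        --bucket-name <bucket_name> \
        --name <agent_iso_object_name> \
        --display-name oci-cluster \
        --source-image-type QCOW2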

  2. After the image is imported, go to the Edit image capabilities setting and ensure that only UEFI_64 is selected for the Firmware field.

  3. For instructions on creating a compute instance from the supplied base image for your cluster topology, see Creating an instance in the Oracle documentation. The following OpenShift Container Platform cluster topologies are supported on OCI:

    • Installing an OpenShift Container Platform cluster on a single node.

    • A highly available cluster that has a minimum of three control plane instances and two compute instances.

    • A compact three-node cluster that has a minimum of three control plane instances.

      Before you create the compute instance, check that you have enough memory and disk resources for your cluster. Additionally, ensure that at least one compute instance has the same IP address as the address stated under rendezvousIP in the agent-config.yaml file.

      The following example lists important settings for an instance named oci-cluster-master.

    • In the Image and shape section, go to Image → My images and then select your custom image.

    • In the Image and shape section, open the Shape menu and select a shape that has at least 4 CPUs and 16 GB of memory.

    • From the Boot volume section, select the Specify a custom boot volume size checkbox. Enter a value that is at least 100 GB for the boot volume size. Assign the number of volume performance units (VPUs) that your organization needs, such as a value in the range of 20 to 30 VPUs.
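
    As a sketch of the equivalent OCI CLI launch, with the OCIDs and availability domain as placeholders; the shape is one example of a flexible shape that meets the CPU and memory minimums:

    $ oci compute instance launch \
        --compartment-id <compartment_ocid> \
        --availability-domain <availability_domain> \
        --display-name oci-cluster-master \
        --image-id <custom_image_ocid> \
        --subnet-id <subnet_ocid> \
        --shape VM.Standard.E4.Flex \
        --shape-config '{"ocpus": 4, "memoryInGBs": 16}' \
        --boot-volume-size-in-gbs 100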

Verifying that your Agent-based cluster installation runs on OCI

Verify that your cluster was installed and is running effectively on Oracle® Cloud Infrastructure (OCI).

Prerequisites
  • You created all the required OCI resources and services. See the "Creating OCI infrastructure resources and services" section.

  • You created install-config.yaml and agent-config.yaml configuration files. See the "Creating configuration files for installing a cluster on OCI" section.

  • You uploaded the agent ISO image to Oracle’s default Object Storage bucket, and you created a compute instance on OCI. For more information, see "Running a cluster on OCI".

Procedure

After you deploy the compute instance on a self-managed node in your OpenShift Container Platform cluster, you can monitor the cluster’s status by choosing one of the following options:

  • From the OpenShift Container Platform CLI, enter the following command:

    $ ./openshift-install agent wait-for install-complete --log-level debug

    Check the status of the rendezvous host node that runs the bootstrap node. After the host reboots, the host forms part of the cluster.

  • Use the kubeconfig file to check the status of various OpenShift Container Platform components. Set the KUBECONFIG environment variable to the relative path of the cluster’s kubeconfig configuration file:

    $ export KUBECONFIG=~/auth/kubeconfig

    Check the status of each of the cluster’s self-managed nodes. CCM applies a label to each node to designate the node as running in a cluster on OCI.

    $ oc get nodes -A
    Output example
    NAME                                   STATUS  ROLES                 AGE  VERSION
    main-0.private.agenttest.oraclevcn.com Ready   control-plane,master  7m   v1.27.4+6eeca63
    main-1.private.agenttest.oraclevcn.com Ready   control-plane,master  15m  v1.27.4+d7fa83f
    main-2.private.agenttest.oraclevcn.com Ready   control-plane,master  15m  v1.27.4+d7fa83f

    Check the status of each of the cluster’s Operators, with the CCM Operator status being a good indicator that your cluster is running.

    $ oc get co
    Truncated output example
    NAME           VERSION     AVAILABLE  PROGRESSING    DEGRADED   SINCE   MESSAGE
    authentication 4.15.0-0    True       False          False      6m18s
    baremetal      4.15.0-0    True       False          False      2m42s
    network        4.15.0-0    True       True           False      5m58s  Progressing: …
        …