Configuring NTP for disconnected clusters

OpenShift Container Platform installs the chrony Network Time Protocol (NTP) service on the cluster nodes. Use the following procedure to configure NTP servers on the control plane nodes and configure compute nodes as NTP clients of the control plane nodes after a successful deployment.

OpenShift Container Platform nodes must agree on a date and time to run properly. Configuring the compute nodes to retrieve the date and time from the NTP servers on the control plane nodes enables the installation and operation of clusters that are not connected to a routable network and therefore do not have access to a higher-stratum NTP server.

Procedure
  1. Install Butane on your installation host by using the following command:

    $ sudo dnf -y install butane
  2. Create a Butane config, 99-master-chrony-conf-override.bu, including the contents of the chrony.conf file for the control plane nodes.

    See "Creating machine configs with Butane" for information about Butane.

    Butane config example
    variant: openshift
    version: 4.17.0
    metadata:
      name: 99-master-chrony-conf-override
      labels:
        machineconfiguration.openshift.io/role: master
    storage:
      files:
        - path: /etc/chrony.conf
          mode: 0644
          overwrite: true
          contents:
            inline: |
              # Use public servers from the pool.ntp.org project.
              # Please consider joining the pool (https://www.pool.ntp.org/join.html).
    
              # The Machine Config Operator manages this file
              server openshift-master-0.<cluster-name>.<domain> iburst (1)
              server openshift-master-1.<cluster-name>.<domain> iburst
              server openshift-master-2.<cluster-name>.<domain> iburst
    
              stratumweight 0
              driftfile /var/lib/chrony/drift
              rtcsync
              makestep 10 3
              bindcmdaddress 127.0.0.1
              bindcmdaddress ::1
              keyfile /etc/chrony.keys
              commandkey 1
              generatecommandkey
              noclientlog
              logchange 0.5
              logdir /var/log/chrony
    
              # Configure the control plane nodes to serve as local NTP servers
              # for all compute nodes, even if they are not in sync with an
              # upstream NTP server.
    
              # Allow NTP client access from the local network.
              allow all
              # Serve time even if not synchronized to a time source.
              local stratum 3 orphan
    1 You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name.
  3. Use Butane to generate a MachineConfig object file, 99-master-chrony-conf-override.yaml, containing the configuration to be delivered to the control plane nodes:

    $ butane 99-master-chrony-conf-override.bu -o 99-master-chrony-conf-override.yaml
  4. Create a Butane config, 99-worker-chrony-conf-override.bu, including the contents of the chrony.conf file for the compute nodes that references the NTP servers on the control plane nodes.

    Butane config example
    variant: openshift
    version: 4.17.0
    metadata:
      name: 99-worker-chrony-conf-override
      labels:
        machineconfiguration.openshift.io/role: worker
    storage:
      files:
        - path: /etc/chrony.conf
          mode: 0644
          overwrite: true
          contents:
            inline: |
              # The Machine Config Operator manages this file.
              server openshift-master-0.<cluster-name>.<domain> iburst (1)
              server openshift-master-1.<cluster-name>.<domain> iburst
              server openshift-master-2.<cluster-name>.<domain> iburst
    
              stratumweight 0
              driftfile /var/lib/chrony/drift
              rtcsync
              makestep 10 3
              bindcmdaddress 127.0.0.1
              bindcmdaddress ::1
              keyfile /etc/chrony.keys
              commandkey 1
              generatecommandkey
              noclientlog
              logchange 0.5
              logdir /var/log/chrony
    1 You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name.
  5. Use Butane to generate a MachineConfig object file, 99-worker-chrony-conf-override.yaml, containing the configuration to be delivered to the worker nodes:

    $ butane 99-worker-chrony-conf-override.bu -o 99-worker-chrony-conf-override.yaml
  6. Apply the 99-master-chrony-conf-override.yaml policy to the control plane nodes.

    $ oc apply -f 99-master-chrony-conf-override.yaml
    Example output
    machineconfig.machineconfiguration.openshift.io/99-master-chrony-conf-override created
  7. Apply the 99-worker-chrony-conf-override.yaml policy to the compute nodes.

    $ oc apply -f 99-worker-chrony-conf-override.yaml
    Example output
    machineconfig.machineconfiguration.openshift.io/99-worker-chrony-conf-override created
  8. Check the status of the applied NTP settings.

    $ oc describe machineconfigpool
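
    Optionally, verify that a compute node synchronizes time from the control plane NTP servers. The following command is a minimal sketch; <compute_node_name> is a placeholder for the name of one of your compute nodes:

    $ oc debug node/<compute_node_name> -- chroot /host chronyc sources

    When the configuration is active, the output lists the openshift-master NTP servers as time sources.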

Enabling a provisioning network after installation

The assisted installer and installer-provisioned installation for bare metal clusters provide the ability to deploy a cluster without a provisioning network. This capability is for scenarios such as proof-of-concept clusters or deploying exclusively with Redfish virtual media when each node’s baseboard management controller is routable via the baremetal network.

You can enable a provisioning network after installation using the Cluster Baremetal Operator (CBO).

Prerequisites
  • A dedicated physical network must exist, connected to all worker and control plane nodes.

  • You must isolate the native, untagged physical network.

  • The network cannot have a DHCP server when the provisioningNetwork configuration setting is set to Managed.

  • You can omit the provisioningInterface setting in OpenShift Container Platform 4.10 to use the bootMACAddress configuration setting.

Procedure
  1. When setting the provisioningInterface setting, first identify the provisioning interface name for the cluster nodes. For example, eth0 or eno1.

  2. Enable the Preboot eXecution Environment (PXE) on the provisioning network interface of the cluster nodes.

  3. Retrieve the current state of the provisioning network and save it to a provisioning custom resource (CR) file:

    $ oc get provisioning -o yaml > enable-provisioning-nw.yaml
  4. Modify the provisioning CR file:

    $ vim ~/enable-provisioning-nw.yaml

    Scroll down to the provisioningNetwork configuration setting and change it from Disabled to Managed. Then, add the provisioningIP, provisioningNetworkCIDR, provisioningDHCPRange, provisioningInterface, and watchAllNameSpaces configuration settings after the provisioningNetwork setting. Provide appropriate values for each setting.

    apiVersion: v1
    items:
    - apiVersion: metal3.io/v1alpha1
      kind: Provisioning
      metadata:
        name: provisioning-configuration
      spec:
        provisioningNetwork: (1)
        provisioningIP: (2)
        provisioningNetworkCIDR: (3)
        provisioningDHCPRange: (4)
        provisioningInterface: (5)
        watchAllNameSpaces: (6)
    1 The provisioningNetwork is one of Managed, Unmanaged, or Disabled. When set to Managed, Metal3 manages the provisioning network and the CBO deploys the Metal3 pod with a configured DHCP server. When set to Unmanaged, the system administrator configures the DHCP server manually.
    2 The provisioningIP is the static IP address that the DHCP server and ironic use to provision the network. This static IP address must be within the provisioning subnet, and outside of the DHCP range. If you configure this setting, it must have a valid IP address even if the provisioning network is Disabled. The static IP address is bound to the metal3 pod. If the metal3 pod fails and moves to another server, the static IP address also moves to the new server.
    3 The Classless Inter-Domain Routing (CIDR) address. If you configure this setting, it must have a valid CIDR address even if the provisioning network is Disabled. For example: 192.168.0.1/24.
    4 The DHCP range. This setting is only applicable to a Managed provisioning network. Omit this configuration setting if the provisioning network is Disabled. For example: 192.168.0.64, 192.168.0.253.
    5 The NIC name for the provisioning interface on cluster nodes. The provisioningInterface setting is only applicable to Managed and Unmanaged provisioning networks. Omit the provisioningInterface configuration setting if the provisioning network is Disabled. Omit the provisioningInterface configuration setting to use the bootMACAddress configuration setting instead.
    6 Set this setting to true if you want metal3 to watch namespaces other than the default openshift-machine-api namespace. The default value is false.
  5. Save the changes to the provisioning CR file.

  6. Apply the provisioning CR file to the cluster:

    $ oc apply -f enable-provisioning-nw.yaml
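
    For reference, a completed provisioning CR might look similar to the following example. The IP address, CIDR, DHCP range, and interface name are example values only; adapt them to your environment:

    apiVersion: metal3.io/v1alpha1
    kind: Provisioning
    metadata:
      name: provisioning-configuration
    spec:
      provisioningNetwork: Managed
      provisioningIP: 172.22.0.3
      provisioningNetworkCIDR: 172.22.0.0/24
      provisioningDHCPRange: 172.22.0.10,172.22.0.254
      provisioningInterface: enp1s0
      watchAllNameSpaces: false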

Creating a manifest object that includes a customized br-ex bridge

As an alternative to using the configure-ovs.sh shell script to set a customized br-ex bridge on a bare-metal platform, you can create a NodeNetworkConfigurationPolicy custom resource (CR) that includes a customized br-ex bridge network configuration.

Creating a NodeNetworkConfigurationPolicy CR that includes a customized br-ex bridge is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

This feature supports the following tasks:

  • Modifying the maximum transmission unit (MTU) for your cluster.

  • Modifying attributes of a different bond interface, such as MIImon (Media Independent Interface Monitor), bonding mode, or Quality of Service (QoS).

  • Updating DNS values.

Consider the following use cases for creating a manifest object that includes a customized br-ex bridge:

  • You want to make postinstallation changes to the bridge, such as changing the Open vSwitch (OVS) or OVN-Kubernetes br-ex bridge network. The configure-ovs.sh shell script does not support making postinstallation changes to the bridge.

  • You want to deploy the bridge on a different interface than the interface that holds the host or server IP address.

  • You want to make advanced configurations to the bridge that are not possible with the configure-ovs.sh shell script. Using the script for these configurations might result in the bridge failing to connect multiple network interfaces and to facilitate data forwarding between the interfaces.

Prerequisites
  • You set a customized br-ex bridge by using the alternative method to the configure-ovs.sh shell script.

  • You installed the Kubernetes NMState Operator.

Procedure
  • Create a NodeNetworkConfigurationPolicy (NNCP) CR and define a customized br-ex bridge network configuration. Depending on your needs, ensure that you set a masquerade IP for either the ipv4.address.ip, ipv6.address.ip, or both parameters. A masquerade IP address must match an in-use IP address block.

    As a post-installation task, you can configure most parameters for a customized br-ex bridge that you defined in an existing NNCP CR, except for the IP address.

    Example of an NNCP CR that sets IPv6 and IPv4 masquerade IP addresses
    apiVersion: nmstate.io/v1
    kind: NodeNetworkConfigurationPolicy
    metadata:
      name: worker-0-br-ex (1)
    spec:
      nodeSelector:
        kubernetes.io/hostname: worker-0
      desiredState:
        interfaces:
        - name: enp2s0 (2)
          type: ethernet (3)
          state: up (4)
          ipv4:
            enabled: false (5)
          ipv6:
            enabled: false
        - name: br-ex
          type: ovs-bridge
          state: up
          ipv4:
            enabled: false
            dhcp: false
          ipv6:
            enabled: false
            dhcp: false
          bridge:
            port:
            - name: enp2s0 (6)
            - name: br-ex
        - name: br-ex
          type: ovs-interface
          state: up
          copy-mac-from: enp2s0
          ipv4:
            enabled: true
            dhcp: true
            address:
            - ip: "169.254.169.2"
              prefix-length: 29
          ipv6:
            enabled: false
            dhcp: false
            address:
            - ip: "fd69::2"
              prefix-length: 125
    1 Name of the policy.
    2 Name of the interface.
    3 The type of the interface. In this example, the type is ethernet.
    4 The requested state for the interface after creation.
    5 Disables IPv4 and IPv6 in this example.
    6 The node NIC to which the bridge is attached.
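
    After you create the NNCP CR file, you can apply the policy and confirm that it exists on the cluster. A minimal sketch, assuming that you saved the CR as worker-0-br-ex.yaml:

    $ oc apply -f worker-0-br-ex.yaml

    $ oc get nncp worker-0-br-ex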

Services for a user-managed load balancer

You can configure an OpenShift Container Platform cluster to use a user-managed load balancer in place of the default load balancer.

Configuring a user-managed load balancer depends on your vendor’s load balancer.

The information and examples in this section are for guideline purposes only. Consult the vendor documentation for more specific information about the vendor’s load balancer.

Red Hat supports the following services for a user-managed load balancer:

  • Ingress Controller

  • OpenShift API

  • OpenShift MachineConfig API

You can choose whether you want to configure one or all of these services for a user-managed load balancer. Configuring only the Ingress Controller service is a common configuration option. To better understand each service, view the following diagrams:

Figure 1. Example network workflow that shows an Ingress Controller operating in an OpenShift Container Platform environment
Figure 2. Example network workflow that shows an OpenShift API operating in an OpenShift Container Platform environment
Figure 3. Example network workflow that shows an OpenShift MachineConfig API operating in an OpenShift Container Platform environment

The following configuration options are supported for user-managed load balancers:

  • Use a node selector to map the Ingress Controller to a specific set of nodes. You must assign a static IP address to each node in this set, or configure each node to receive the same IP address from the Dynamic Host Configuration Protocol (DHCP). Infrastructure nodes commonly receive this type of configuration.

  • Target all IP addresses on a subnet. This configuration can reduce maintenance overhead, because you can create and destroy nodes within those networks without reconfiguring the load balancer targets. If you deploy your ingress pods by using a machine set on a smaller network, such as a /27 or /28, you can simplify your load balancer targets.

    You can list all IP addresses that exist in a network by checking the machine config pool’s resources.

Before you configure a user-managed load balancer for your OpenShift Container Platform cluster, consider the following information:

  • For a front-end IP address, you can use the same IP address for the Ingress Controller’s load balancer and the API load balancer. Check the vendor’s documentation for this capability.

  • For a back-end IP address, ensure that an IP address for an OpenShift Container Platform control plane node does not change during the lifetime of the user-managed load balancer. You can achieve this by completing one of the following actions:

    • Assign a static IP address to each control plane node.

    • Configure each node to receive the same IP address from the DHCP every time the node requests a DHCP lease. Depending on the vendor, the DHCP lease might be in the form of an IP reservation or a static DHCP assignment.

  • Manually define each node that runs the Ingress Controller in the user-managed load balancer for the Ingress Controller back-end service. Otherwise, if the Ingress Controller moves to an undefined node, a connection outage can occur.

Configuring a user-managed load balancer

You can configure an OpenShift Container Platform cluster to use a user-managed load balancer in place of the default load balancer.

Before you configure a user-managed load balancer, ensure that you read the "Services for a user-managed load balancer" section.

Read the following prerequisites that apply to the service that you want to configure for your user-managed load balancer.

MetalLB, which runs on a cluster, functions as a user-managed load balancer.

OpenShift API prerequisites
  • You defined a front-end IP address.

  • TCP ports 6443 and 22623 are exposed on the front-end IP address of your load balancer. Check the following items:

    • Port 6443 provides access to the OpenShift API service.

    • Port 22623 can provide Ignition startup configurations to nodes.

  • The front-end IP address and port 6443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster.

  • The front-end IP address and port 22623 are reachable only by OpenShift Container Platform nodes.

  • The load balancer backend can communicate with OpenShift Container Platform control plane nodes on ports 6443 and 22623.

Ingress Controller prerequisites
  • You defined a front-end IP address.

  • TCP ports 443 and 80 are exposed on the front-end IP address of your load balancer.

  • The front-end IP address, port 80, and port 443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster.

  • The front-end IP address, port 80, and port 443 are reachable by all nodes that operate in your OpenShift Container Platform cluster.

  • The load balancer backend can communicate with OpenShift Container Platform nodes that run the Ingress Controller on ports 80, 443, and 1936.

Prerequisite for health check URL specifications

You can configure most load balancers by setting health check URLs that determine if a service is available or unavailable. OpenShift Container Platform provides these health checks for the OpenShift API, Machine Configuration API, and Ingress Controller backend services.

The following examples show health check specifications for the previously listed backend services:

Example of a Kubernetes API health check specification
Path: HTTPS:6443/readyz
Healthy threshold: 2
Unhealthy threshold: 2
Timeout: 10
Interval: 10
Example of a Machine Config API health check specification
Path: HTTPS:22623/healthz
Healthy threshold: 2
Unhealthy threshold: 2
Timeout: 10
Interval: 10
Example of an Ingress Controller health check specification
Path: HTTP:1936/healthz/ready
Healthy threshold: 2
Unhealthy threshold: 2
Timeout: 5
Interval: 10
Procedure
  1. Configure the HAProxy Ingress Controller, so that you can enable access to the cluster from your load balancer on ports 6443, 22623, 443, and 80. Depending on your needs, you can specify the IP address of a single subnet or IP addresses from multiple subnets in your HAProxy configuration.

    Example HAProxy configuration with one listed subnet
    # ...
    listen my-cluster-api-6443
        bind 192.168.1.100:6443
        mode tcp
        balance roundrobin
        option httpchk
        http-check connect
        http-check send meth GET uri /readyz
        http-check expect status 200
        server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2
        server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2
        server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2
    
    listen my-cluster-machine-config-api-22623
        bind 192.168.1.100:22623
        mode tcp
        balance roundrobin
        option httpchk
        http-check connect
        http-check send meth GET uri /healthz
        http-check expect status 200
        server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2
        server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2
        server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2
    
    listen my-cluster-apps-443
        bind 192.168.1.100:443
        mode tcp
        balance roundrobin
        option httpchk
        http-check connect
        http-check send meth GET uri /healthz/ready
        http-check expect status 200
        server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2
        server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2
        server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2
    
    listen my-cluster-apps-80
        bind 192.168.1.100:80
        mode tcp
        balance roundrobin
        option httpchk
        http-check connect
        http-check send meth GET uri /healthz/ready
        http-check expect status 200
        server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2
        server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2
        server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2
    # ...
    Example HAProxy configuration with multiple listed subnets
    # ...
    listen api-server-6443
        bind *:6443
        mode tcp
          server master-00 192.168.83.89:6443 check inter 1s
          server master-01 192.168.84.90:6443 check inter 1s
          server master-02 192.168.85.99:6443 check inter 1s
          server bootstrap 192.168.80.89:6443 check inter 1s
    
    listen machine-config-server-22623
        bind *:22623
        mode tcp
          server master-00 192.168.83.89:22623 check inter 1s
          server master-01 192.168.84.90:22623 check inter 1s
          server master-02 192.168.85.99:22623 check inter 1s
          server bootstrap 192.168.80.89:22623 check inter 1s
    
    listen ingress-router-80
        bind *:80
        mode tcp
        balance source
          server worker-00 192.168.83.100:80 check inter 1s
          server worker-01 192.168.83.101:80 check inter 1s
    
    listen ingress-router-443
        bind *:443
        mode tcp
        balance source
          server worker-00 192.168.83.100:443 check inter 1s
          server worker-01 192.168.83.101:443 check inter 1s
    
    listen ironic-api-6385
        bind *:6385
        mode tcp
        balance source
          server master-00 192.168.83.89:6385 check inter 1s
          server master-01 192.168.84.90:6385 check inter 1s
          server master-02 192.168.85.99:6385 check inter 1s
          server bootstrap 192.168.80.89:6385 check inter 1s
    
    listen inspector-api-5050
        bind *:5050
        mode tcp
        balance source
          server master-00 192.168.83.89:5050 check inter 1s
          server master-01 192.168.84.90:5050 check inter 1s
          server master-02 192.168.85.99:5050 check inter 1s
          server bootstrap 192.168.80.89:5050 check inter 1s
    # ...
  2. Use the curl CLI command to verify that the user-managed load balancer and its resources are operational:

    1. Verify that the Kubernetes API server resource is accessible through the load balancer, by running the following command and observing the response:

      $ curl https://<loadbalancer_ip_address>:6443/version --insecure

      If the configuration is correct, you receive a JSON object in response:

      {
        "major": "1",
        "minor": "11+",
        "gitVersion": "v1.11.0+ad103ed",
        "gitCommit": "ad103ed",
        "gitTreeState": "clean",
        "buildDate": "2019-01-09T06:44:10Z",
        "goVersion": "go1.10.3",
        "compiler": "gc",
        "platform": "linux/amd64"
      }
    2. Verify that the Machine Config Server resource is accessible through the load balancer, by running the following command and observing the output:

      $ curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure

      If the configuration is correct, the output from the command shows the following response:

      HTTP/1.1 200 OK
      Content-Length: 0
    3. Verify that the Ingress Controller resource is accessible through the load balancer on port 80, by running the following command and observing the output:

      $ curl -I -L -H "Host: console-openshift-console.apps.<cluster_name>.<base_domain>" http://<load_balancer_front_end_IP_address>

      If the configuration is correct, the output from the command shows the following response:

      HTTP/1.1 302 Found
      content-length: 0
      location: https://console-openshift-console.apps.ocp4.private.opequon.net/
      cache-control: no-cache
    4. Verify that the Ingress Controller resource is accessible through the load balancer on port 443, by running the following command and observing the output:

      $ curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>

      If the configuration is correct, the output from the command shows the following response:

      HTTP/1.1 200 OK
      referrer-policy: strict-origin-when-cross-origin
      set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax
      x-content-type-options: nosniff
      x-dns-prefetch-control: off
      x-frame-options: DENY
      x-xss-protection: 1; mode=block
      date: Wed, 04 Oct 2023 16:29:38 GMT
      content-type: text/html; charset=utf-8
      set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None
      cache-control: private
  3. Configure the DNS records for your cluster to target the front-end IP addresses of the user-managed load balancer. You must update records to your DNS server for the cluster API and applications over the load balancer.

    Examples of modified DNS records
    <load_balancer_ip_address>  A  api.<cluster_name>.<base_domain>
    A record pointing to Load Balancer Front End
    <load_balancer_ip_address>   A apps.<cluster_name>.<base_domain>
    A record pointing to Load Balancer Front End

    DNS propagation might take some time for each DNS record to become available. Ensure that each DNS record propagates before validating each record.

  4. For your OpenShift Container Platform cluster to use the user-managed load balancer, you must specify the following configuration in your cluster’s install-config.yaml file:

    # ...
    platform:
        loadBalancer:
          type: UserManaged (1)
          apiVIPs:
          - <api_ip> (2)
          ingressVIPs:
          - <ingress_ip> (3)
    # ...
    1 Set UserManaged for the type parameter to specify a user-managed load balancer for your cluster. The parameter defaults to OpenShiftManagedDefault, which denotes the default internal load balancer. For services defined in an openshift-kni-infra namespace, a user-managed load balancer can deploy the coredns service to pods in your cluster but ignores keepalived and haproxy services.
    2 Required parameter when you specify a user-managed load balancer. Specify the user-managed load balancer’s public IP address, so that the Kubernetes API can communicate with the user-managed load balancer.
    3 Required parameter when you specify a user-managed load balancer. Specify the user-managed load balancer’s public IP address, so that the user-managed load balancer can manage ingress traffic for your cluster.
Verification
  1. Use the curl CLI command to verify that the user-managed load balancer and DNS record configuration are operational:

    1. Verify that you can access the cluster API, by running the following command and observing the output:

      $ curl https://api.<cluster_name>.<base_domain>:6443/version --insecure

      If the configuration is correct, you receive a JSON object in response:

      {
        "major": "1",
        "minor": "11+",
        "gitVersion": "v1.11.0+ad103ed",
        "gitCommit": "ad103ed",
        "gitTreeState": "clean",
        "buildDate": "2019-01-09T06:44:10Z",
        "goVersion": "go1.10.3",
        "compiler": "gc",
        "platform": "linux/amd64"
        }
    2. Verify that you can access the cluster machine configuration, by running the following command and observing the output:

      $ curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure

      If the configuration is correct, the output from the command shows the following response:

      HTTP/1.1 200 OK
      Content-Length: 0
    3. Verify that you can access each cluster application on port 80, by running the following command and observing the output:

      $ curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure

      If the configuration is correct, the output from the command shows the following response:

      HTTP/1.1 302 Found
      content-length: 0
      location: https://console-openshift-console.apps.<cluster_name>.<base_domain>/
      cache-control: no-cache

      HTTP/1.1 200 OK
      referrer-policy: strict-origin-when-cross-origin
      set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure
      x-content-type-options: nosniff
      x-dns-prefetch-control: off
      x-frame-options: DENY
      x-xss-protection: 1; mode=block
      date: Tue, 17 Nov 2020 08:42:10 GMT
      content-type: text/html; charset=utf-8
      set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None
      cache-control: private
    4. Verify that you can access each cluster application on port 443, by running the following command and observing the output:

      $ curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure

      If the configuration is correct, the output from the command shows the following response:

      HTTP/1.1 200 OK
      referrer-policy: strict-origin-when-cross-origin
      set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax
      x-content-type-options: nosniff
      x-dns-prefetch-control: off
      x-frame-options: DENY
      x-xss-protection: 1; mode=block
      date: Wed, 04 Oct 2023 16:29:38 GMT
      content-type: text/html; charset=utf-8
      set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None
      cache-control: private

Configuration using the Bare Metal Operator

When deploying OpenShift Container Platform on bare-metal hosts, there are times when you need to make changes to the host either before or after provisioning. This can include inspecting the host’s hardware and firmware details. It can also include formatting disks or changing modifiable firmware settings.

You can use the Bare Metal Operator (BMO) to provision, manage, and inspect bare-metal hosts in your cluster. The BMO can complete the following operations:

  • Provision bare-metal hosts to the cluster with a specific image.

  • Turn on or off a host.

  • Inspect hardware details of the host and report them to the corresponding BareMetalHost resource.

  • Upgrade or downgrade a host’s firmware to a specific version.

  • Inspect firmware and configure BIOS settings.

  • Clean disk contents for the host before or after provisioning the host.

The BMO uses the following resources to complete these tasks:

  • BareMetalHost

  • HostFirmwareSettings

  • FirmwareSchema

  • HostFirmwareComponents

The BMO maintains an inventory of the physical hosts in the cluster by mapping each bare-metal host to an instance of the BareMetalHost custom resource definition. Each BareMetalHost resource features hardware, software, and firmware details. The BMO continually inspects the bare-metal hosts in the cluster to ensure each BareMetalHost resource accurately details the components of the corresponding host.

The BMO also uses the HostFirmwareSettings resource, the FirmwareSchema resource, and the HostFirmwareComponents resource to detail firmware specifications and upgrade or downgrade firmware for the bare-metal host.

The BMO interfaces with bare-metal hosts in the cluster by using the Ironic API service. The Ironic service uses the Baseboard Management Controller (BMC) on the host to interface with the machine.
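
For example, you can list the resources that the BMO manages by using standard oc commands. The following is a minimal sketch, assuming the resources exist in the default openshift-machine-api namespace:

$ oc get baremetalhosts,hostfirmwaresettings,hostfirmwarecomponents,firmwareschemas -n openshift-machine-api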

Bare Metal Operator architecture

The Bare Metal Operator (BMO) uses the following resources to provision, manage, and inspect bare-metal hosts in your cluster. The following diagram illustrates the architecture of these resources:

BMO architecture overview
BareMetalHost

The BareMetalHost resource defines a physical host and its properties. When you provision a bare-metal host to the cluster, you must define a BareMetalHost resource for that host. For ongoing management of the host, you can inspect the information in the BareMetalHost or update this information.

The BareMetalHost resource features provisioning information such as the following:

  • Deployment specifications such as the operating system boot image or the custom RAM disk

  • Provisioning state

  • Baseboard Management Controller (BMC) address

  • Desired power state

The BareMetalHost resource features hardware information such as the following:

  • Number of CPUs

  • MAC address of a NIC

  • Size of the host’s storage device

  • Current power state

HostFirmwareSettings

You can use the HostFirmwareSettings resource to retrieve and manage the firmware settings for a host. When a host moves to the Available state, the Ironic service reads the host’s firmware settings and creates the HostFirmwareSettings resource. There is a one-to-one mapping between the BareMetalHost resource and the HostFirmwareSettings resource.

You can use the HostFirmwareSettings resource to inspect the firmware specifications for a host or to update a host’s firmware specifications.

You must adhere to the schema specific to the vendor firmware when you edit the spec field of the HostFirmwareSettings resource. This schema is defined in the read-only FirmwareSchema resource.

FirmwareSchema

Firmware settings vary among hardware vendors and host models. A FirmwareSchema resource is a read-only resource that contains the types and limits for each firmware setting on each host model. The data comes directly from the BMC by using the Ironic service. The FirmwareSchema resource enables you to identify valid values you can specify in the spec field of the HostFirmwareSettings resource.

A FirmwareSchema resource can apply to many BareMetalHost resources if the schema is the same.

HostFirmwareComponents

Metal3 provides the HostFirmwareComponents resource, which describes BIOS and baseboard management controller (BMC) firmware versions. You can upgrade or downgrade the host’s firmware to a specific version by editing the spec field of the HostFirmwareComponents resource. This is useful when deploying with validated patterns that have been tested against specific firmware versions.

About the BareMetalHost resource

Metal3 introduces the concept of the BareMetalHost resource, which defines a physical host and its properties. The BareMetalHost resource contains two sections:

  1. The BareMetalHost spec

  2. The BareMetalHost status

The BareMetalHost spec

The spec section of the BareMetalHost resource defines the desired state of the host.

Table 1. BareMetalHost spec
Parameters Description

automatedCleaningMode

An interface to enable or disable automated cleaning during provisioning and de-provisioning. When set to disabled, it skips automated cleaning. When set to metadata, automated cleaning is enabled. The default setting is metadata.

bmc:
  address:
  credentialsName:
  disableCertificateVerification:

The bmc configuration setting contains the connection information for the baseboard management controller (BMC) on the host. The fields are:

  • address: The URL for communicating with the host’s BMC controller.

  • credentialsName: A reference to a secret containing the username and password for the BMC.

  • disableCertificateVerification: A boolean to skip certificate validation when set to true.

bootMACAddress

The MAC address of the NIC used for provisioning the host.

bootMode

The boot mode of the host. It defaults to UEFI, but it can also be set to legacy for BIOS boot, or UEFISecureBoot.

consumerRef

A reference to another resource that is using the host. It could be empty if another resource is not currently using the host. For example, a Machine resource might use the host when the machine-api is using the host.

description

A human-provided string to help identify the host.

externallyProvisioned

A boolean indicating whether the host provisioning and deprovisioning are managed externally. When set:

  • Power status can still be managed using the online field.

  • Hardware inventory will be monitored, but no provisioning or deprovisioning operations are performed on the host.

firmware

Contains information about the BIOS configuration of bare metal hosts. Currently, firmware is only supported by iRMC, iDRAC, iLO4 and iLO5 BMCs. The sub fields are:

  • simultaneousMultithreadingEnabled: Allows a single physical processor core to appear as several logical processors. Valid settings are true or false.

  • sriovEnabled: SR-IOV support enables a hypervisor to create virtual instances of a PCI-express device, potentially increasing performance. Valid settings are true or false.

  • virtualizationEnabled: Supports the virtualization of platform hardware. Valid settings are true or false.

image:
  url:
  checksum:
  checksumType:
  format:

The image configuration setting holds the details for the image to be deployed on the host. Ironic requires the image fields. However, when the externallyProvisioned configuration setting is set to true and the external management does not require power control, the fields can be empty. The setting supports the following fields:

  • url: The URL of an image to deploy to the host.

  • checksum: The actual checksum or a URL to a file containing the checksum for the image at image.url.

  • checksumType: You can specify checksum algorithms. Currently image.checksumType only supports md5, sha256, and sha512. The default checksum type is md5.

  • format: This is the disk format of the image. It can be one of raw, qcow2, vdi, vmdk, live-iso, or be left unset. Setting it to raw enables raw image streaming in the Ironic agent for that image. Setting it to live-iso enables ISO images to live boot without deploying to disk, and it ignores the checksum fields.

networkData

A reference to the secret containing the network configuration data and its namespace, so that it can be attached to the host before the host boots to set up the network.

online

A boolean indicating whether the host should be powered on (true) or off (false). Changing this value will trigger a change in the power state of the physical host.

raid:
  hardwareRAIDVolumes:
  softwareRAIDVolumes:

(Optional) Contains the information about the RAID configuration for bare metal hosts. If not specified, it retains the current configuration.

OpenShift Container Platform 4.17 supports hardware RAID for BMCs, including:

  • Fujitsu iRMC with support for RAID levels 0, 1, 5, 6, and 10

  • Dell iDRAC using the Redfish API with firmware version 6.10.30.20 or later and RAID levels 0, 1, and 5

OpenShift Container Platform 4.17 does not support software RAID.

See the following configuration settings:

  • hardwareRAIDVolumes: Contains the list of logical drives for hardware RAID, and defines the desired volume configuration in the hardware RAID. If you do not specify rootDeviceHints, the first volume is the root volume. The sub-fields are:

    • level: The RAID level for the logical drive. The following levels are supported: 0,1,2,5,6,1+0,5+0,6+0.

    • name: The name of the volume as a string. It should be unique within the server. If not specified, the volume name will be auto-generated.

    • numberOfPhysicalDisks: The number of physical drives as an integer to use for the logical drive. Defaults to the minimum number of disk drives required for the particular RAID level.

    • physicalDisks: The list of names of physical disk drives as a string. This is an optional field. If specified, the controller field must be specified too.

    • controller: (Optional) The name of the RAID controller as a string to use in the hardware RAID volume.

    • rotational: If set to true, it will only select rotational disk drives. If set to false, it will only select solid-state and NVMe drives. If not set, it selects any drive types, which is the default behavior.

    • sizeGibibytes: The size of the logical drive as an integer to create in GiB. If unspecified or set to 0, it uses the maximum capacity of the physical drive for the logical drive.

  • softwareRAIDVolumes: OpenShift Container Platform 4.17 does not support software RAID. The following information is for reference only. This configuration contains the list of logical disks for software RAID. If you do not specify rootDeviceHints, the first volume is the root volume. If you set HardwareRAIDVolumes, this item will be invalid. Software RAIDs will always be deleted. The number of created software RAID devices must be 1 or 2. If there is only one software RAID device, it must be RAID-1. If there are two RAID devices, the first device must be RAID-1, while the RAID level for the second device can be 0, 1, or 1+0. The first RAID device will be the deployment device. Therefore, enforcing RAID-1 reduces the risk of a non-booting node in case of a device failure. The softwareRAIDVolume field defines the desired configuration of the volume in the software RAID. The sub-fields are:

    • level: The RAID level for the logical drive. The following levels are supported: 0,1,1+0.

    • physicalDisks: A list of device hints. The number of items should be greater than or equal to 2.

    • sizeGibibytes: The size of the logical disk drive as an integer to be created in GiB. If unspecified or set to 0, it uses the maximum capacity of the physical drive for the logical drive.

You can set the hardwareRAIDVolumes field as an empty slice to clear the hardware RAID configuration. For example:

spec:
  raid:
    hardwareRAIDVolumes: []

If you receive an error message indicating that the driver does not support RAID, set the raid, hardwareRAIDVolumes, or softwareRAIDVolumes field to nil. You might need to ensure that the host has a RAID controller.

rootDeviceHints:
  deviceName:
  hctl:
  model:
  vendor:
  serialNumber:
  minSizeGigabytes:
  wwn:
  wwnWithExtension:
  wwnVendorExtension:
  rotational:

The rootDeviceHints parameter enables provisioning of the RHCOS image to a particular device. It examines the devices in the order it discovers them, and compares the discovered values with the hint values. It uses the first discovered device that matches the hint value. The configuration can combine multiple hints, but a device must match all hints to get selected. The fields are:

  • deviceName: A string containing a Linux device name like /dev/vda. The hint must match the actual value exactly.

  • hctl: A string containing a SCSI bus address like 0:0:0:0. The hint must match the actual value exactly.

  • model: A string containing a vendor-specific device identifier. The hint can be a substring of the actual value.

  • vendor: A string containing the name of the vendor or manufacturer of the device. The hint can be a sub-string of the actual value.

  • serialNumber: A string containing the device serial number. The hint must match the actual value exactly.

  • minSizeGigabytes: An integer representing the minimum size of the device in gigabytes.

  • wwn: A string containing the unique storage identifier. The hint must match the actual value exactly.

  • wwnWithExtension: A string containing the unique storage identifier with the vendor extension appended. The hint must match the actual value exactly.

  • wwnVendorExtension: A string containing the unique vendor storage identifier. The hint must match the actual value exactly.

  • rotational: A boolean indicating whether the device should be a rotating disk (true) or not (false).
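
For example, the following is a minimal sketch of a spec section that uses rootDeviceHints to pin the root device to a non-rotational disk. The device name is an example value only:

spec:
  rootDeviceHints:
    deviceName: /dev/sda
    rotational: false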

The BareMetalHost status

The BareMetalHost status represents the host’s current state, and includes tested credentials, current hardware details, and other information.

Table 2. BareMetalHost status
Parameters Description

goodCredentials

A reference to the secret and its namespace holding the last set of baseboard management controller (BMC) credentials the system was able to validate as working.

errorMessage

Details of the last error reported by the provisioning backend, if any.

errorType

Indicates the class of problem that has caused the host to enter an error state. The error types are:

  • provisioned registration error: Occurs when the controller is unable to re-register an already provisioned host.

  • registration error: Occurs when the controller is unable to connect to the host’s baseboard management controller.

  • inspection error: Occurs when an attempt to obtain hardware details from the host fails.

  • preparation error: Occurs when cleaning fails.

  • provisioning error: Occurs when the controller fails to provision or deprovision the host.

  • power management error: Occurs when the controller is unable to modify the power state of the host.

  • detach error: Occurs when the controller is unable to detach the host from the provisioner.

hardware:
  cpu
    arch:
    model:
    clockMegahertz:
    flags:
    count:

The hardware.cpu field contains details of the CPU(s) in the system. The fields include:

  • arch: The architecture of the CPU.

  • model: The CPU model as a string.

  • clockMegahertz: The speed in MHz of the CPU.

  • flags: The list of CPU flags. For example, 'mmx', 'sse', 'sse2', 'vmx', and so on.

  • count: The number of CPUs available in the system.

hardware:
  firmware:

Contains BIOS firmware information. For example, the hardware vendor and version.

hardware:
  nics:
  - ip:
    name:
    mac:
    speedGbps:
    vlans:
    vlanId:
    pxe:

The hardware.nics field contains a list of network interfaces for the host. The fields include:

  • ip: The IP address of the NIC, if one was assigned when the discovery agent ran.

  • name: A string identifying the network device. For example, nic-1.

  • mac: The MAC address of the NIC.

  • speedGbps: The speed of the device in Gbps.

  • vlans: A list holding all the VLANs available for this NIC.

  • vlanId: The untagged VLAN ID.

  • pxe: Whether the NIC is able to boot using PXE.

hardware:
  ramMebibytes:

The host’s amount of memory in Mebibytes (MiB).

hardware:
  storage:
  - name:
    rotational:
    sizeBytes:
    serialNumber:

The hardware.storage field contains a list of storage devices available to the host. The fields include:

  • name: A string identifying the storage device. For example, disk 1 (boot).

  • rotational: Indicates whether the disk is rotational, and returns either true or false.

  • sizeBytes: The size of the storage device.

  • serialNumber: The device’s serial number.

hardware:
  systemVendor:
    manufacturer:
    productName:
    serialNumber:

Contains information about the host’s manufacturer, the productName, and the serialNumber.

lastUpdated

The timestamp of the last time the status of the host was updated.

operationalStatus

The status of the server. The status is one of the following:

  • OK: Indicates all the details for the host are known, correctly configured, working, and manageable.

  • discovered: Implies some of the host’s details are either not working correctly or missing. For example, the BMC address is known but the login credentials are not.

  • error: Indicates the system found some sort of irrecoverable error. Refer to the errorMessage field in the status section for more details.

  • delayed: Indicates that provisioning is delayed to limit simultaneous provisioning of multiple hosts.

  • detached: Indicates the host is marked unmanaged.

poweredOn

Boolean indicating whether the host is powered on.

provisioning:
  state:
  id:
  image:
  raid:
  firmware:
  rootDeviceHints:

The provisioning field contains values related to deploying an image to the host. The sub-fields include:

  • state: The current state of any ongoing provisioning operation. The states include:

    • <empty string>: There is no provisioning happening at the moment.

    • unmanaged: There is insufficient information available to register the host.

    • registering: The agent is checking the host’s BMC details.

    • match profile: The agent is comparing the discovered hardware details on the host against known profiles.

    • available: The host is available for provisioning. This state was previously known as ready.

    • preparing: The existing configuration will be removed, and the new configuration will be set on the host.

    • provisioning: The provisioner is writing an image to the host’s storage.

    • provisioned: The provisioner wrote an image to the host’s storage.

    • externally provisioned: Metal3 does not manage the image on the host.

    • deprovisioning: The provisioner is wiping the image from the host’s storage.

    • inspecting: The agent is collecting hardware details for the host.

    • deleting: The agent is deleting the host from the cluster.

  • id: The unique identifier for the service in the underlying provisioning tool.

  • image: The image most recently provisioned to the host.

  • raid: The list of hardware or software RAID volumes recently set.

  • firmware: The BIOS configuration for the bare metal server.

  • rootDeviceHints: The root device selection instructions used for the most recent provisioning operation.

triedCredentials

A reference to the secret and its namespace holding the last set of BMC credentials that were sent to the provisioning backend.

Getting the BareMetalHost resource

The BareMetalHost resource contains the properties of a physical host. You must get the BareMetalHost resource for a physical host to review its properties.

Procedure
  1. Get the list of BareMetalHost resources:

    $ oc get bmh -n openshift-machine-api -o yaml

    You can use baremetalhost as the long form of bmh with the oc get command.

  2. Get the list of hosts:

    $ oc get bmh -n openshift-machine-api
  3. Get the BareMetalHost resource for a specific host:

    $ oc get bmh <host_name> -n openshift-machine-api -o yaml

    Where <host_name> is the name of the host.

    Example output
    apiVersion: metal3.io/v1alpha1
    kind: BareMetalHost
    metadata:
      creationTimestamp: "2022-06-16T10:48:33Z"
      finalizers:
      - baremetalhost.metal3.io
      generation: 2
      name: openshift-worker-0
      namespace: openshift-machine-api
      resourceVersion: "30099"
      uid: 1513ae9b-e092-409d-be1b-ad08edeb1271
    spec:
      automatedCleaningMode: metadata
      bmc:
        address: redfish://10.46.61.19:443/redfish/v1/Systems/1
        credentialsName: openshift-worker-0-bmc-secret
        disableCertificateVerification: true
      bootMACAddress: 48:df:37:c7:f7:b0
      bootMode: UEFI
      consumerRef:
        apiVersion: machine.openshift.io/v1beta1
        kind: Machine
        name: ocp-edge-958fk-worker-0-nrfcg
        namespace: openshift-machine-api
      customDeploy:
        method: install_coreos
      online: true
      rootDeviceHints:
        deviceName: /dev/disk/by-id/scsi-<serial_number>
      userData:
        name: worker-user-data-managed
        namespace: openshift-machine-api
    status:
      errorCount: 0
      errorMessage: ""
      goodCredentials:
        credentials:
          name: openshift-worker-0-bmc-secret
          namespace: openshift-machine-api
        credentialsVersion: "16120"
      hardware:
        cpu:
          arch: x86_64
          clockMegahertz: 2300
          count: 64
          flags:
          - 3dnowprefetch
          - abm
          - acpi
          - adx
          - aes
          model: Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz
        firmware:
          bios:
            date: 10/26/2020
            vendor: HPE
            version: U30
        hostname: openshift-worker-0
        nics:
        - mac: 48:df:37:c7:f7:b3
          model: 0x8086 0x1572
          name: ens1f3
        ramMebibytes: 262144
        storage:
        - hctl: "0:0:0:0"
          model: VK000960GWTTB
          name: /dev/disk/by-id/scsi-<serial_number>
          sizeBytes: 960197124096
          type: SSD
          vendor: ATA
        systemVendor:
          manufacturer: HPE
          productName: ProLiant DL380 Gen10 (868703-B21)
          serialNumber: CZ200606M3
      lastUpdated: "2022-06-16T11:41:42Z"
      operationalStatus: OK
      poweredOn: true
      provisioning:
        ID: 217baa14-cfcf-4196-b764-744e184a3413
        bootMode: UEFI
        customDeploy:
          method: install_coreos
        image:
          url: ""
        raid:
          hardwareRAIDVolumes: null
          softwareRAIDVolumes: []
        rootDeviceHints:
          deviceName: /dev/disk/by-id/scsi-<serial_number>
        state: provisioned
      triedCredentials:
        credentials:
          name: openshift-worker-0-bmc-secret
          namespace: openshift-machine-api
        credentialsVersion: "16120"
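
    To print a single field instead of the full resource, you can use a JSONPath query. For example, the following command returns the provisioning state for a host:

    $ oc get bmh <host_name> -n openshift-machine-api -o jsonpath='{.status.provisioning.state}'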

Editing a BareMetalHost resource

After you deploy an OpenShift Container Platform cluster on bare metal, you might need to edit a node’s BareMetalHost resource. Consider the following examples:

  • You deploy a cluster with the Assisted Installer and need to add or edit the baseboard management controller (BMC) host name or IP address.

  • You want to move a node from one cluster to another without deprovisioning it.

Prerequisites
  • Ensure the node is in the Provisioned, ExternallyProvisioned, or Available state.

Procedure
  1. Get the list of nodes:

    $ oc get bmh -n openshift-machine-api
  2. Before editing the node’s BareMetalHost resource, detach the node from Ironic by running the following command:

    $ oc annotate baremetalhost <node_name> -n openshift-machine-api 'baremetalhost.metal3.io/detached=true' (1)
    1 Replace <node_name> with the name of the node.
  3. Edit the BareMetalHost resource by running the following command:

    $ oc edit bmh <node_name> -n openshift-machine-api
  4. Reattach the node to Ironic by running the following command:

    $ oc annotate baremetalhost <node_name> -n openshift-machine-api 'baremetalhost.metal3.io/detached'-
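
    To confirm that the detach annotation took effect, and that the host returned to normal management after you removed it, you can check the operational status of the host. For example:

    $ oc get bmh <node_name> -n openshift-machine-api -o jsonpath='{.status.operationalStatus}'

    The status reports detached while the annotation is set and returns to OK after the host is reattached.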

Troubleshooting latency when deleting a BareMetalHost resource

When the Bare Metal Operator (BMO) deletes a BareMetalHost resource, Ironic deprovisions the bare-metal host with a process called cleaning. When cleaning fails, Ironic retries the cleaning process three times, which is the source of the latency. The cleaning process might not succeed, causing the provisioning status of the bare-metal host to remain in the deleting state indefinitely. When this occurs, use the following procedure to disable the cleaning process.

Do not remove finalizers from the BareMetalHost resource.

Procedure
  1. If the cleaning process fails and restarts, wait for it to finish. This might take about 5 minutes.

  2. If the provisioning status remains in the deleting state, disable the cleaning process by modifying the BareMetalHost resource and setting the automatedCleaningMode field to disabled.

See "Editing a BareMetalHost resource" for additional details.

Attaching a non-bootable ISO to a bare-metal node

You can attach a generic, non-bootable ISO virtual media image to a provisioned node by using the DataImage resource. After you apply the resource, the ISO image becomes accessible to the operating system after it has booted. This is useful for configuring a node after provisioning the operating system and before the node boots for the first time.

Prerequisites
  • The node must use Redfish or drivers derived from it to support this feature.

  • The node must be in the Provisioned or ExternallyProvisioned state.

  • The name of the DataImage resource must be the same as the name of the node defined in its BareMetalHost resource.

  • You have a valid URL to the ISO image.

Procedure
  1. Create a DataImage resource:

    apiVersion: metal3.io/v1alpha1
    kind: DataImage
    metadata:
      name: <node_name> (1)
    spec:
      url: "http://dataimage.example.com/non-bootable.iso" (2)
    1 Specify the name of the node as defined in its BareMetalHost resource.
    2 Specify the URL and path to the ISO image.
  2. Save the DataImage resource to a file by running the following command:

    $ vim <node_name>-dataimage.yaml
  3. Apply the DataImage resource by running the following command:

    $ oc apply -f <node_name>-dataimage.yaml -n <node_namespace> (1)
    1 Replace <node_namespace> so that the namespace matches the namespace for the BareMetalHost resource. For example, openshift-machine-api.
  4. Reboot the node.

    To reboot the node, attach the reboot.metal3.io annotation, or reset the online status in the BareMetalHost resource. A forced reboot of the bare-metal node changes the state of the node to NotReady for a while, for example, for 5 minutes or more.
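
    For example, the following command attaches the reboot annotation. This is a sketch only; it assumes that an empty annotation value is sufficient to request the reboot:

    $ oc annotate baremetalhost <node_name> -n openshift-machine-api reboot.metal3.io=''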

  5. View the DataImage resource by running the following command:

    $ oc get dataimage <node_name> -n openshift-machine-api -o yaml
    Example output
    apiVersion: v1
    items:
    - apiVersion: metal3.io/v1alpha1
      kind: DataImage
      metadata:
        annotations:
          kubectl.kubernetes.io/last-applied-configuration: |
            {"apiVersion":"metal3.io/v1alpha1","kind":"DataImage","metadata":{"annotations":{},"name":"bmh-node-1","namespace":"openshift-machine-api"},"spec":{"url":"http://dataimage.example.com/non-bootable.iso"}}
        creationTimestamp: "2024-06-10T12:00:00Z"
        finalizers:
        - dataimage.metal3.io
        generation: 1
        name: bmh-node-1
        namespace: openshift-machine-api
        ownerReferences:
        - apiVersion: metal3.io/v1alpha1
          blockOwnerDeletion: true
          controller: true
          kind: BareMetalHost
          name: bmh-node-1
          uid: 046cdf8e-0e97-485a-8866-e62d20e0f0b3
        resourceVersion: "21695581"
        uid: c5718f50-44b6-4a22-a6b7-71197e4b7b69
      spec:
        url: http://dataimage.example.com/non-bootable.iso
      status:
        attachedImage:
          url: http://dataimage.example.com/non-bootable.iso
        error:
          count: 0
          message: ""
        lastReconciled: "2024-06-10T12:05:00Z"
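
    To print only the attached image URL from the status, you can use a JSONPath query. The following command is a sketch:

    $ oc get dataimage <node_name> -n openshift-machine-api -o jsonpath='{.status.attachedImage.url}{"\n"}'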

About the HostFirmwareSettings resource

You can use the HostFirmwareSettings resource to retrieve and manage the BIOS settings for a host. When a host moves to the Available state, Ironic reads the host’s BIOS settings and creates the HostFirmwareSettings resource. The resource contains the complete BIOS configuration returned from the baseboard management controller (BMC). Whereas the firmware field in the BareMetalHost resource returns three vendor-independent fields, the HostFirmwareSettings resource typically comprises many vendor-specific BIOS settings per host.

The HostFirmwareSettings resource contains two sections:

  1. The HostFirmwareSettings spec.

  2. The HostFirmwareSettings status.

The HostFirmwareSettings spec

The spec section of the HostFirmwareSettings resource defines the desired state of the host’s BIOS, and it is empty by default. Ironic uses the settings in the spec.settings section to update the baseboard management controller (BMC) when the host is in the Preparing state. Use the FirmwareSchema resource to ensure that you do not send invalid name/value pairs to hosts. See "About the FirmwareSchema resource" for additional details.

Example
spec:
  settings:
    ProcTurboMode: Disabled (1)
1 In this example, the spec.settings section contains a name/value pair that sets the ProcTurboMode BIOS setting to Disabled.

Integer parameters listed in the status section appear as strings, for example, "1". When setting integers in the spec.settings section, set the values as integers without quotes, for example, 1.
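
For example, assuming a hypothetical integer setting named BootDelay alongside the ProcTurboMode enumeration, the spec.settings section might look like the following sketch:

spec:
  settings:
    # BootDelay is a hypothetical setting name used for illustration only.
    ProcTurboMode: Disabled
    BootDelay: 5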

The HostFirmwareSettings status

The status represents the current state of the host’s BIOS.

Table 3. HostFirmwareSettings
Parameters Description
status:
  conditions:
  - lastTransitionTime:
    message:
    observedGeneration:
    reason:
    status:
    type:

The conditions field contains a list of state changes. The sub-fields include:

  • lastTransitionTime: The last time the state changed.

  • message: A description of the state change.

  • observedGeneration: The current generation of the status. If metadata.generation and this field are not the same, the status.conditions might be out of date.

  • reason: The reason for the state change.

  • status: The status of the state change. The status can be True, False or Unknown.

  • type: The type of state change. The types are Valid and ChangeDetected.

status:
  schema:
    name:
    namespace:
    lastUpdated:

The FirmwareSchema for the firmware settings. The fields include:

  • name: The name or unique identifier referencing the schema.

  • namespace: The namespace where the schema is stored.

  • lastUpdated: The last time the resource was updated.

status:
  settings:

The settings field contains a list of name/value pairs of a host’s current BIOS settings.
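
For example, the status.settings section of a host might resemble the following sketch. The setting names and values are illustrative only and vary by vendor and host model:

status:
  settings:
    BootMode: Uefi
    ProcTurboMode: Enabled
    SecureBoot: Disabled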

Getting the HostFirmwareSettings resource

The HostFirmwareSettings resource contains the vendor-specific BIOS properties of a physical host. You must get the HostFirmwareSettings resource for a physical host to review its BIOS properties.

Procedure
  1. Get the detailed list of HostFirmwareSettings resources:

    $ oc get hfs -n openshift-machine-api -o yaml

    You can use hostfirmwaresettings as the long form of hfs with the oc get command.

  2. Get the list of HostFirmwareSettings resources:

    $ oc get hfs -n openshift-machine-api
  3. Get the HostFirmwareSettings resource for a particular host:

    $ oc get hfs <host_name> -n openshift-machine-api -o yaml

    Where <host_name> is the name of the host.

Editing the HostFirmwareSettings resource

You can edit the HostFirmwareSettings of provisioned hosts.

You can edit hosts only when they are in the provisioned state, and you cannot edit read-only values. You cannot edit hosts in the externally provisioned state.

Procedure
  1. Get the list of HostFirmwareSettings resources:

    $ oc get hfs -n openshift-machine-api
  2. Edit a host’s HostFirmwareSettings resource:

    $ oc edit hfs <host_name> -n openshift-machine-api

    Where <host_name> is the name of a provisioned host. The HostFirmwareSettings resource will open in the default editor for your terminal.

  3. Add name/value pairs to the spec.settings section:

    Example
    spec:
      settings:
        name: value (1)
    
    1 Use the FirmwareSchema resource to identify the available settings for the host. You cannot set values that are read-only.
  4. Save the changes and exit the editor.

  5. Get the host’s machine name:

    $ oc get bmh <host_name> -n openshift-machine-api

    Where <host_name> is the name of the host. The machine name appears under the CONSUMER field.

  6. Annotate the machine to delete it from the machine set:

    $ oc annotate machine <machine_name> machine.openshift.io/delete-machine=true -n openshift-machine-api

    Where <machine_name> is the name of the machine to delete.

  7. Get a list of nodes and count the number of worker nodes:

    $ oc get nodes
  8. Get the machine set:

    $ oc get machinesets -n openshift-machine-api
  9. Scale the machine set:

    $ oc scale machineset <machineset_name> -n openshift-machine-api --replicas=<n-1>

    Where <machineset_name> is the name of the machine set and <n-1> is the decremented number of worker nodes.

  10. When the host enters the Available state, scale up the machine set to make the HostFirmwareSettings resource changes take effect:

    $ oc scale machineset <machineset_name> -n openshift-machine-api --replicas=<n>

    Where <machineset_name> is the name of the machine set and <n> is the number of worker nodes.
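
As an alternative to the interactive editor in step 2, you can patch the spec.settings section directly. The following command is a sketch only; ProcTurboMode is used as an illustrative setting name and must exist in the host’s FirmwareSchema:

    $ oc patch hfs <host_name> -n openshift-machine-api --type merge -p '{"spec": {"settings": {"ProcTurboMode": "Disabled"}}}'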

Verifying the HostFirmwareSettings resource is valid

When the user edits the spec.settings section to make a change to the HostFirmwareSettings (HFS) resource, the Bare Metal Operator (BMO) validates the change against the FirmwareSchema resource, which is a read-only resource. If a setting is invalid, the BMO sets the status of the Valid condition in status.conditions to False and also generates an event and stores it in the HFS resource. Use the following procedure to verify that the resource is valid.

Procedure
  1. Get a list of HostFirmwareSettings resources:

    $ oc get hfs -n openshift-machine-api
  2. Verify that the HostFirmwareSettings resource for a particular host is valid:

    $ oc describe hfs <host_name> -n openshift-machine-api

    Where <host_name> is the name of the host.

    Example output
    Events:
      Type    Reason            Age    From                                    Message
      ----    ------            ----   ----                                    -------
      Normal  ValidationFailed  2m49s  metal3-hostfirmwaresettings-controller  Invalid BIOS setting: Setting ProcTurboMode is invalid, unknown enumeration value - Foo

    If the response returns ValidationFailed, there is an error in the resource configuration and you must update the values to conform to the FirmwareSchema resource.
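
    You can also check the Valid condition directly instead of reviewing the events. The following command is a sketch that uses a JSONPath query and prints True when the settings are valid:

    $ oc get hfs <host_name> -n openshift-machine-api -o jsonpath='{.status.conditions[?(@.type=="Valid")].status}'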

About the FirmwareSchema resource

BIOS settings vary among hardware vendors and host models. A FirmwareSchema resource is a read-only resource that contains the types and limits for each BIOS setting on each host model. The data comes directly from the BMC through Ironic. The FirmwareSchema enables you to identify valid values you can specify in the spec field of the HostFirmwareSettings resource. The FirmwareSchema resource has a unique identifier derived from its settings and limits. Identical host models use the same FirmwareSchema identifier. It is likely that multiple instances of HostFirmwareSettings use the same FirmwareSchema.

Table 4. FirmwareSchema specification
Parameters Description
<BIOS_setting_name>
  attribute_type:
  allowable_values:
  lower_bound:
  upper_bound:
  min_length:
  max_length:
  read_only:
  unique:

The spec is a simple map consisting of the BIOS setting name and the limits of the setting. The fields include:

  • attribute_type: The type of setting. The supported types are:

    • Enumeration

    • Integer

    • String

    • Boolean

  • allowable_values: A list of allowable values when the attribute_type is Enumeration.

  • lower_bound: The lowest allowed value when attribute_type is Integer.

  • upper_bound: The highest allowed value when attribute_type is Integer.

  • min_length: The shortest string length that the value can have when attribute_type is String.

  • max_length: The longest string length that the value can have when attribute_type is String.

  • read_only: The setting is read only and cannot be modified.

  • unique: The setting is specific to this host.
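
For example, an Enumeration entry for a hypothetical ProcTurboMode setting might look like the following sketch. The allowable values are illustrative and depend on the vendor and host model:

ProcTurboMode:
  attribute_type: Enumeration
  allowable_values:
  - Enabled
  - Disabled
  read_only: false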

Getting the FirmwareSchema resource

Each host model from each vendor has different BIOS settings. When editing the HostFirmwareSettings resource’s spec section, the name/value pairs you set must conform to that host’s firmware schema. To ensure you are setting valid name/value pairs, get the FirmwareSchema for the host and review it.

Procedure
  1. To get a list of FirmwareSchema resource instances, execute the following:

    $ oc get firmwareschema -n openshift-machine-api
  2. To get a particular FirmwareSchema instance, execute:

    $ oc get firmwareschema <instance_name> -n openshift-machine-api -o yaml

    Where <instance_name> is the name of the schema instance stated in the HostFirmwareSettings resource (see Table 3).
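
    If you are not sure which schema instance applies to a host, you can read the schema reference from the host’s HostFirmwareSettings resource. The following command is a sketch that uses a JSONPath query:

    $ oc get hfs <host_name> -n openshift-machine-api -o jsonpath='{.status.schema.name}{"\n"}'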

About the HostFirmwareComponents resource

Metal3 provides the HostFirmwareComponents resource, which describes BIOS and baseboard management controller (BMC) firmware versions. The HostFirmwareComponents resource contains two sections:

  1. The HostFirmwareComponents spec

  2. The HostFirmwareComponents status

HostFirmwareComponents spec

The spec section of the HostFirmwareComponents resource defines the desired state of the host’s BIOS and BMC versions.

Table 5. HostFirmwareComponents spec
Parameters Description
updates:
  component:
  url:

The updates configuration setting contains the components to update. The fields are:

  • component: The name of the component. The valid settings are bios or bmc.

  • url: The URL to the component’s firmware specification and version.
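
For example, a spec section that requests a BMC firmware update might look like the following sketch. The URL is a placeholder for a location that the provisioning service can reach:

spec:
  updates:
  - component: bmc
    url: http://<web_server>/<firmware_file>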

HostFirmwareComponents status

The status section of the HostFirmwareComponents resource returns the current status of the host’s BIOS and BMC versions.

Table 6. HostFirmwareComponents status
Parameters Description
components:
  component:
  initialVersion:
  currentVersion:
  lastVersionFlashed:
  updatedAt:

The components section contains the status of the components. The fields are:

  • component: The name of the firmware component. It returns bios or bmc.

  • initialVersion: The initial firmware version of the component. Ironic retrieves this information when creating the BareMetalHost resource. You cannot change it.

  • currentVersion: The current firmware version of the component. Initially, the value matches the initialVersion value until Ironic updates the firmware on the bare-metal host.

  • lastVersionFlashed: The last firmware version of the component flashed on the bare-metal host. This field returns null until Ironic updates the firmware.

  • updatedAt: The timestamp when Ironic updated the bare-metal host’s firmware.

updates:
  component:
  url:

The updates configuration setting contains the updated components. The fields are:

  • component: The name of the component.

  • url: The URL to the component’s firmware specification and version.

Getting the HostFirmwareComponents resource

The HostFirmwareComponents resource contains the specific firmware version of the BIOS and baseboard management controller (BMC) of a physical host. You must get the HostFirmwareComponents resource for a physical host to review the firmware version and status.

Procedure
  1. Get the detailed list of HostFirmwareComponents resources:

    $ oc get hostfirmwarecomponents -n openshift-machine-api -o yaml
  2. Get the list of HostFirmwareComponents resources:

    $ oc get hostfirmwarecomponents -n openshift-machine-api
  3. Get the HostFirmwareComponents resource for a particular host:

    $ oc get hostfirmwarecomponents <host_name> -n openshift-machine-api -o yaml

    Where <host_name> is the name of the host.

    Example output
    ---
    apiVersion: metal3.io/v1alpha1
    kind: HostFirmwareComponents
    metadata:
      creationTimestamp: "2024-04-25T20:32:06Z"
      generation: 1
      name: ostest-master-2
      namespace: openshift-machine-api
      ownerReferences:
      - apiVersion: metal3.io/v1alpha1
        blockOwnerDeletion: true
        controller: true
        kind: BareMetalHost
        name: ostest-master-2
        uid: 16022566-7850-4dc8-9e7d-f216211d4195
      resourceVersion: "2437"
      uid: 2038d63f-afc0-4413-8ffe-2f8e098d1f6c
    spec:
      updates: []
    status:
      components:
      - component: bios
        currentVersion: 1.0.0
        initialVersion: 1.0.0
      - component: bmc
        currentVersion: "1.00"
        initialVersion: "1.00"
      conditions:
      - lastTransitionTime: "2024-04-25T20:32:06Z"
        message: ""
        observedGeneration: 1
        reason: OK
        status: "True"
        type: Valid
      - lastTransitionTime: "2024-04-25T20:32:06Z"
        message: ""
        observedGeneration: 1
        reason: OK
        status: "False"
        type: ChangeDetected
      lastUpdated: "2024-04-25T20:32:06Z"
      updates: []

Editing the HostFirmwareComponents resource

You can edit the HostFirmwareComponents resource of a node.

Procedure
  1. Get the detailed list of HostFirmwareComponents resources:

    $ oc get hostfirmwarecomponents -n openshift-machine-api -o yaml
  2. Edit a host’s HostFirmwareComponents resource:

    $ oc edit hostfirmwarecomponents <host_name> -n openshift-machine-api (1)
    1 Where <host_name> is the name of the host. The HostFirmwareComponents resource will open in the default editor for your terminal.
    Example output
    ---
    apiVersion: metal3.io/v1alpha1
    kind: HostFirmwareComponents
    metadata:
      creationTimestamp: "2024-04-25T20:32:06Z"
      generation: 1
      name: ostest-master-2
      namespace: openshift-machine-api
      ownerReferences:
      - apiVersion: metal3.io/v1alpha1
        blockOwnerDeletion: true
        controller: true
        kind: BareMetalHost
        name: ostest-master-2
        uid: 16022566-7850-4dc8-9e7d-f216211d4195
      resourceVersion: "2437"
      uid: 2038d63f-afc0-4413-8ffe-2f8e098d1f6c
    spec:
      updates:
        - component: bios (1)
          url: https://myurl.with.firmware.for.bios (2)
        - component: bmc (3)
          url: https://myurl.with.firmware.for.bmc (4)
    status:
      components:
      - component: bios
        currentVersion: 1.0.0
        initialVersion: 1.0.0
      - component: bmc
        currentVersion: "1.00"
        initialVersion: "1.00"
      conditions:
      - lastTransitionTime: "2024-04-25T20:32:06Z"
        message: ""
        observedGeneration: 1
        reason: OK
        status: "True"
        type: Valid
      - lastTransitionTime: "2024-04-25T20:32:06Z"
        message: ""
        observedGeneration: 1
        reason: OK
        status: "False"
        type: ChangeDetected
      lastUpdated: "2024-04-25T20:32:06Z"
    1 To set a BIOS version, set the component attribute to bios.
    2 To set a BIOS version, set the url attribute to the URL for the firmware version of the BIOS.
    3 To set a BMC version, set the component attribute to bmc.
    4 To set a BMC version, set the url attribute to the URL for the firmware version of the BMC.
  3. Save the changes and exit the editor.

  4. Get the host’s machine name:

    $ oc get bmh <host_name> -n openshift-machine-api (1)
    1 Where <host_name> is the name of the host. The machine name appears under the CONSUMER field.
  5. Annotate the machine to delete it from the machine set:

    $ oc annotate machine <machine_name> machine.openshift.io/delete-machine=true -n openshift-machine-api (1)
    1 Where <machine_name> is the name of the machine to delete.
  6. Get a list of nodes and count the number of worker nodes:

    $ oc get nodes
  7. Get the machine set:

    $ oc get machinesets -n openshift-machine-api
  8. Scale the machine set:

    $ oc scale machineset <machineset_name> -n openshift-machine-api --replicas=<n-1> (1)
    1 Where <machineset_name> is the name of the machine set and <n-1> is the decremented number of worker nodes.
  9. When the host enters the Available state, scale up the machine set to make the HostFirmwareComponents resource changes take effect:

    $ oc scale machineset <machineset_name> -n openshift-machine-api --replicas=<n> (1)
    1 Where <machineset_name> is the name of the machine set and <n> is the number of worker nodes.
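
After the new worker node joins the cluster, you can confirm that the firmware update was applied by reviewing the status section of the HostFirmwareComponents resource. The following command is a sketch that uses a JSONPath query to print the component status:

    $ oc get hostfirmwarecomponents <host_name> -n openshift-machine-api -o jsonpath='{.status.components}{"\n"}'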