
After successfully deploying an installer-provisioned cluster, consider the following post-installation procedures.

Optional: Configuring NTP for disconnected clusters

OpenShift Container Platform installs the chrony Network Time Protocol (NTP) service on the cluster nodes. Use the following procedure to configure NTP servers on the control plane nodes and configure worker nodes as NTP clients of the control plane nodes after a successful deployment.

Configuring NTP for disconnected clusters

OpenShift Container Platform nodes must agree on a date and time to run properly. Configuring the worker nodes to retrieve the date and time from NTP servers on the control plane nodes enables the installation and operation of clusters that are not connected to a routable network and therefore do not have access to a higher stratum NTP server.

Procedure
  1. Create a Butane config, 99-master-chrony-conf-override.bu, including the contents of the chrony.conf file for the control plane nodes.

    See "Creating machine configs with Butane" for information about Butane.

    Butane config example
    variant: openshift
    version: 4.11.0
    metadata:
      name: 99-master-chrony-conf-override
      labels:
        machineconfiguration.openshift.io/role: master
    storage:
      files:
        - path: /etc/chrony.conf
          mode: 0644
          overwrite: true
          contents:
            inline: |
              # Use public servers from the pool.ntp.org project.
              # Please consider joining the pool (https://www.pool.ntp.org/join.html).
    
              # The Machine Config Operator manages this file
              server openshift-master-0.<cluster-name>.<domain> iburst (1)
              server openshift-master-1.<cluster-name>.<domain> iburst
              server openshift-master-2.<cluster-name>.<domain> iburst
    
              stratumweight 0
              driftfile /var/lib/chrony/drift
              rtcsync
              makestep 10 3
              bindcmdaddress 127.0.0.1
              bindcmdaddress ::1
              keyfile /etc/chrony.keys
              commandkey 1
              generatecommandkey
              noclientlog
              logchange 0.5
              logdir /var/log/chrony
    
              # Configure the control plane nodes to serve as local NTP servers
              # for all worker nodes, even if they are not in sync with an
              # upstream NTP server.
    
              # Allow NTP client access from the local network.
              allow all
              # Serve time even if not synchronized to a time source.
              local stratum 3 orphan
    1 You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name.
  2. Use Butane to generate a MachineConfig object file, 99-master-chrony-conf-override.yaml, containing the configuration to be delivered to the control plane nodes:

    $ butane 99-master-chrony-conf-override.bu -o 99-master-chrony-conf-override.yaml
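
    The generated file is a MachineConfig object. The following is a representative sketch of what the generated file looks like; the exact Ignition version and the base64-encoded file contents depend on your Butane release and on your chrony.conf contents:

    Example MachineConfig object (sketch)
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: master
      name: 99-master-chrony-conf-override
    spec:
      config:
        ignition:
          version: 3.2.0
        storage:
          files:
            - contents:
                source: data:text/plain;charset=utf-8;base64,<base64-encoded-chrony.conf>
              mode: 420
              overwrite: true
              path: /etc/chrony.conf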
  3. Create a Butane config, 99-worker-chrony-conf-override.bu, including the contents of the chrony.conf file for the worker nodes that references the NTP servers on the control plane nodes.

    Butane config example
    variant: openshift
    version: 4.11.0
    metadata:
      name: 99-worker-chrony-conf-override
      labels:
        machineconfiguration.openshift.io/role: worker
    storage:
      files:
        - path: /etc/chrony.conf
          mode: 0644
          overwrite: true
          contents:
            inline: |
              # The Machine Config Operator manages this file.
              server openshift-master-0.<cluster-name>.<domain> iburst (1)
              server openshift-master-1.<cluster-name>.<domain> iburst
              server openshift-master-2.<cluster-name>.<domain> iburst
    
              stratumweight 0
              driftfile /var/lib/chrony/drift
              rtcsync
              makestep 10 3
              bindcmdaddress 127.0.0.1
              bindcmdaddress ::1
              keyfile /etc/chrony.keys
              commandkey 1
              generatecommandkey
              noclientlog
              logchange 0.5
              logdir /var/log/chrony
    1 You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name.
  4. Use Butane to generate a MachineConfig object file, 99-worker-chrony-conf-override.yaml, containing the configuration to be delivered to the worker nodes:

    $ butane 99-worker-chrony-conf-override.bu -o 99-worker-chrony-conf-override.yaml
  5. Apply the 99-master-chrony-conf-override.yaml policy to the control plane nodes.

    $ oc apply -f 99-master-chrony-conf-override.yaml
    Example output
    machineconfig.machineconfiguration.openshift.io/99-master-chrony-conf-override created
  6. Apply the 99-worker-chrony-conf-override.yaml policy to the worker nodes.

    $ oc apply -f 99-worker-chrony-conf-override.yaml
    Example output
    machineconfig.machineconfiguration.openshift.io/99-worker-chrony-conf-override created
  7. Check the status of the applied NTP settings.

    $ oc describe machineconfigpool
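
    Each machine config pool reports an Updated condition of True after the new configuration rolls out to all of its nodes. As an additional, optional check, you can confirm that a worker node is synchronizing with the NTP servers on the control plane nodes by running chronyc on the node. The following command is a sketch; <node-name> is a placeholder for an actual worker node name:

    $ oc debug node/<node-name> -- chroot /host chronyc sources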

Enabling a provisioning network after installation

The assisted installer and installer-provisioned installation for bare metal clusters provide the ability to deploy a cluster without a provisioning network. This capability is for scenarios such as proof-of-concept clusters or deploying exclusively with Redfish virtual media when each node’s baseboard management controller is routable via the baremetal network.

You can enable a provisioning network after installation using the Cluster Baremetal Operator (CBO).
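
Before you begin, you can optionally confirm that the Cluster Baremetal Operator is available by checking its ClusterOperator status. This check assumes the default operator name, baremetal:

    $ oc get clusteroperator baremetal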

Prerequisites
  • A dedicated physical network must exist, connected to all worker and control plane nodes.

  • You must isolate the native, untagged physical network.

  • The network cannot have a DHCP server when the provisioningNetwork configuration setting is set to Managed.

  • You can omit the provisioningInterface setting in OpenShift Container Platform 4.10 to use the bootMACAddress configuration setting.

Procedure
  1. If you set the provisioningInterface configuration setting, first identify the provisioning interface name for the cluster nodes, for example, eth0 or eno1.

  2. Enable the Preboot eXecution Environment (PXE) on the provisioning network interface of the cluster nodes.

  3. Retrieve the current state of the provisioning network and save it to a provisioning custom resource (CR) file:

    $ oc get provisioning -o yaml > enable-provisioning-nw.yaml
  4. Modify the provisioning CR file:

    $ vim ~/enable-provisioning-nw.yaml

    Scroll down to the provisioningNetwork configuration setting and change it from Disabled to Managed. Then, add the provisioningIP, provisioningNetworkCIDR, provisioningDHCPRange, provisioningInterface, and watchAllNameSpaces configuration settings after the provisioningNetwork setting. Provide appropriate values for each setting.

    apiVersion: v1
    items:
    - apiVersion: metal3.io/v1alpha1
      kind: Provisioning
      metadata:
        name: provisioning-configuration
      spec:
        provisioningNetwork: (1)
        provisioningIP: (2)
        provisioningNetworkCIDR: (3)
        provisioningDHCPRange: (4)
        provisioningInterface: (5)
        watchAllNameSpaces: (6)
    1 The provisioningNetwork is one of Managed, Unmanaged, or Disabled. When set to Managed, Metal3 manages the provisioning network and the CBO deploys the Metal3 pod with a configured DHCP server. When set to Unmanaged, the system administrator configures the DHCP server manually.
    2 The provisioningIP is the static IP address that the DHCP server and ironic use while provisioning the network. This static IP address must be within the provisioning subnet and outside of the DHCP range.
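
    For illustration only, a populated spec might look like the following sketch. The IP addresses, DHCP range, and interface name are example values, not required defaults; replace them with values appropriate to your provisioning network:

    spec:
      provisioningNetwork: Managed
      provisioningIP: 172.22.0.3
      provisioningNetworkCIDR: 172.22.0.0/24
      provisioningDHCPRange: 172.22.0.10,172.22.0.100
      provisioningInterface: eno1
      watchAllNameSpaces: false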