
Determining where installation issues occur

When troubleshooting OpenShift Container Platform installation issues, you can monitor installation logs to determine at which stage issues occur. Then, retrieve diagnostic data relevant to that stage.

OpenShift Container Platform installation proceeds through the following stages:

  1. Ignition configuration files are created.

  2. The bootstrap machine boots and starts hosting the remote resources required for the control plane machines to boot.

  3. The control plane machines fetch the remote resources from the bootstrap machine and finish booting.

  4. The control plane machines use the bootstrap machine to form an etcd cluster.

  5. The bootstrap machine starts a temporary Kubernetes control plane using the new etcd cluster.

  6. The temporary control plane schedules the production control plane to the control plane machines.

  7. The temporary control plane shuts down and passes control to the production control plane.

  8. The bootstrap machine adds OpenShift Container Platform components into the production control plane.

  9. The installation program shuts down the bootstrap machine.

  10. The control plane sets up the worker nodes.

  11. The control plane installs additional services in the form of a set of Operators.

  12. The cluster downloads and configures remaining components needed for day-to-day operation, including the creation of worker machines in supported environments.
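
To determine which of these stages an in-progress installation has reached, you can run the installation program's wait-for subcommands from the installation host. For example:

    $ ./openshift-install wait-for bootstrap-complete --dir <installation_directory>
    $ ./openshift-install wait-for install-complete --dir <installation_directory>

Each command blocks until the corresponding stage finishes and exits with an error if the stage does not complete within its timeout.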

User-provisioned infrastructure installation considerations

The default installation method uses installer-provisioned infrastructure. With installer-provisioned infrastructure clusters, OpenShift Container Platform manages all aspects of the cluster, including the operating system itself. If possible, use this feature to avoid having to provision and maintain the cluster infrastructure.

You can alternatively install OpenShift Container Platform 4.11 on infrastructure that you provide. If you use this installation method, follow the user-provisioned infrastructure installation documentation carefully. Additionally, review the following considerations before the installation:

  • Check the Red Hat Enterprise Linux (RHEL) Ecosystem to determine the level of Red Hat Enterprise Linux CoreOS (RHCOS) support provided for your chosen server hardware or virtualization technology.

  • Many virtualization and cloud environments require agents to be installed on guest operating systems. Ensure that these agents are installed as a containerized workload deployed through a daemon set.

  • Install cloud provider integration if you want to enable features such as dynamic storage, on-demand service routing, node hostname to Kubernetes hostname resolution, and cluster autoscaling.

    It is not possible to enable cloud provider integration in OpenShift Container Platform environments that mix resources from different cloud providers, or that span multiple physical or virtual platforms. The node life cycle controller will not allow nodes that are external to the existing provider to be added to a cluster, and it is not possible to specify more than one cloud provider integration.

  • A provider-specific Machine API implementation is required if you want to use machine sets or autoscaling to automatically provision OpenShift Container Platform cluster nodes.

  • Check whether your chosen cloud provider offers a method to inject Ignition configuration files into hosts as part of their initial deployment. If they do not, you will need to host Ignition configuration files by using an HTTP server (see the example after this list). The steps taken to troubleshoot Ignition configuration file issues differ depending on which of these two methods you use.

  • Storage needs to be manually provisioned if you want to leverage optional framework components such as the embedded container registry, Elasticsearch, or Prometheus. Default storage classes are not defined in user-provisioned infrastructure installations unless explicitly configured.

  • A load balancer is required to distribute API requests across all control plane nodes in highly available OpenShift Container Platform environments. You can use any TCP-based load balancing solution that meets OpenShift Container Platform DNS routing and port requirements.
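
If you choose to host Ignition configuration files yourself, any HTTP server that the cluster machines can reach during first boot is sufficient. The following is a minimal sketch only, assuming that the Ignition files are in <installation_directory> and that port 8080 is reachable from the cluster network; a production setup typically uses a dedicated web server such as httpd:

    $ cd <installation_directory>
    $ python3 -m http.server 8080

You can verify that the hosted files are being served and fetched by using the checks described in "Gathering bootstrap node diagnostic data" later in this document.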

Checking a load balancer configuration before OpenShift Container Platform installation

Check your load balancer configuration prior to starting an OpenShift Container Platform installation.

Prerequisites
  • You have configured an external load balancer of your choosing, in preparation for an OpenShift Container Platform installation. The following example is based on a Red Hat Enterprise Linux (RHEL) host using HAProxy to provide load balancing services to a cluster.

  • You have configured DNS in preparation for an OpenShift Container Platform installation.

  • You have SSH access to your load balancer.

Procedure
  1. Check that the haproxy systemd service is active:

    $ ssh <user_name>@<load_balancer> systemctl status haproxy
  2. Verify that the load balancer is listening on the required ports. The following example references ports 80, 443, 6443, and 22623.

    • For HAProxy instances running on Red Hat Enterprise Linux (RHEL) 6, verify port status by using the netstat command:

      $ ssh <user_name>@<load_balancer> netstat -nltupe | grep -E ':80|:443|:6443|:22623'
    • For HAProxy instances running on Red Hat Enterprise Linux (RHEL) 7 or 8, verify port status by using the ss command:

      $ ssh <user_name>@<load_balancer> ss -nltupe | grep -E ':80|:443|:6443|:22623'

      Red Hat recommends the ss command instead of netstat in Red Hat Enterprise Linux (RHEL) 7 or later. ss is provided by the iproute package. For more information on the ss command, see the Red Hat Enterprise Linux (RHEL) 7 Performance Tuning Guide.

  3. Check that the wildcard DNS record resolves to the load balancer:

    $ dig <wildcard_fqdn> @<dns_server>
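
    In addition to the wildcard record, you can confirm that the API records resolve to the load balancer. The following example assumes the standard api and api-int record names for the cluster:

    $ dig api.<cluster_name>.<base_domain> @<dns_server>
    $ dig api-int.<cluster_name>.<base_domain> @<dns_server>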

Specifying OpenShift Container Platform installer log levels

By default, the OpenShift Container Platform installer log level is set to info. If more detailed logging is required when diagnosing a failed OpenShift Container Platform installation, you can increase the openshift-install log level to debug when starting the installation again.

Prerequisites
  • You have access to the installation host.

Procedure
  • Set the installation log level to debug when initiating the installation:

    $ ./openshift-install --dir <installation_directory> wait-for bootstrap-complete --log-level debug  (1)
    1 Possible log levels include info, warn, error, and debug.
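
    The --log-level flag is also accepted by other openshift-install subcommands. For example, to start an installer-provisioned installation with debug logging:

    $ ./openshift-install create cluster --dir <installation_directory> --log-level debug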

Troubleshooting openshift-install command issues

If you experience issues running the openshift-install command, check the following:

  • The installation has been initiated within 24 hours of Ignition configuration file creation. The Ignition files are created when the following command is run:

    $ ./openshift-install create ignition-configs --dir=./install_dir
  • The install-config.yaml file is in the same directory as the installer. If an alternative installation path is declared by using the ./openshift-install --dir option, verify that the install-config.yaml file exists within that directory.
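
For example, to check how long ago the Ignition configuration files were generated and to review the contents of the installation directory, list the files and their timestamps. The directory name ./install_dir matches the example above:

    $ ls -l ./install_dir
    $ stat -c '%y %n' ./install_dir/*.ign

If more than 24 hours have passed since the Ignition files were created, you typically need to regenerate them before retrying the installation.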

Monitoring installation progress

You can monitor high-level installation, bootstrap, and control plane logs as an OpenShift Container Platform installation progresses. This provides greater visibility into how an installation is proceeding and helps identify the stage at which a failure occurs.

Prerequisites
  • You have access to the cluster as a user with the cluster-admin role.

  • You have installed the OpenShift CLI (oc).

  • You have SSH access to your hosts.

  • You have the fully qualified domain names of the bootstrap and control plane nodes.

    The initial kubeadmin password can be found in <install_directory>/auth/kubeadmin-password on the installation host.
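
    To run oc commands against the cluster, you can export the kubeconfig file that the installation program generates, for example:

    $ export KUBECONFIG=<installation_directory>/auth/kubeconfig
    $ oc whoami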

Procedure
  1. Watch the installation log as the installation progresses:

    $ tail -f ~/<installation_directory>/.openshift_install.log
  2. Monitor the bootkube.service journald unit log on the bootstrap node, after it has booted. This provides visibility into the bootstrapping of the first control plane. Replace <bootstrap_fqdn> with the bootstrap node’s fully qualified domain name:

    $ ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service

    The bootkube.service log on the bootstrap node outputs etcd connection refused errors, indicating that the bootstrap server is unable to connect to etcd on control plane nodes. After etcd has started on each control plane node and the nodes have joined the cluster, the errors should stop.

  3. Monitor kubelet.service journald unit logs on control plane nodes, after they have booted. This provides visibility into control plane node agent activity.

    1. Monitor the logs using oc:

      $ oc adm node-logs --role=master -u kubelet
    2. If the API is not functional, review the logs using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values:

      $ ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service
  4. Monitor crio.service journald unit logs on control plane nodes, after they have booted. This provides visibility into control plane node CRI-O container runtime activity.

    1. Monitor the logs using oc:

      $ oc adm node-logs --role=master -u crio
    2. If the API is not functional, review the logs using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values:

      $ ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service

Gathering bootstrap node diagnostic data

When experiencing bootstrap-related issues, you can gather bootkube.service journald unit logs and container logs from the bootstrap node.

Prerequisites
  • You have SSH access to your bootstrap node.

  • You have the fully qualified domain name of the bootstrap node.

  • If you are hosting Ignition configuration files by using an HTTP server, you must have the HTTP server’s fully qualified domain name and the port number. You must also have SSH access to the HTTP host.

Procedure
  1. If you have access to the bootstrap node’s console, monitor the console until the node reaches the login prompt.

  2. Verify the Ignition file configuration.

    • If you are hosting Ignition configuration files by using an HTTP server:

      1. Verify the bootstrap node Ignition file URL. Replace <http_server_fqdn> with the HTTP server’s fully qualified domain name:

        $ curl -I http://<http_server_fqdn>:<port>/bootstrap.ign  (1)
        1 The -I option returns the header only. If the Ignition file is available at the specified URL, the command returns a 200 OK status. If it is not available, the command returns a 404 file not found status.
      2. To verify that the Ignition file was received by the bootstrap node, query the HTTP server logs on the serving host. For example, if you are using an Apache web server to serve Ignition files, enter the following command:

        $ grep -is 'bootstrap.ign' /var/log/httpd/access_log

        If the bootstrap Ignition file is received, the associated HTTP GET log message will include a 200 OK success status, indicating that the request succeeded.

      3. If the Ignition file was not received, check directly on the serving host that the Ignition files exist and that they have the appropriate file and web server permissions.

    • If you are using a cloud provider mechanism to inject Ignition configuration files into hosts as part of their initial deployment:

      1. Review the bootstrap node’s console to determine if the mechanism is injecting the bootstrap node Ignition file correctly.

  3. Verify the availability of the bootstrap node’s assigned storage device.
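
    For example, you can list the block devices on the node over SSH. Replace <bootstrap_fqdn> with the bootstrap node’s fully qualified domain name:

    $ ssh core@<bootstrap_fqdn> lsblk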

  4. Verify that the bootstrap node has been assigned an IP address from the DHCP server.
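
    For example, you can review the interface addresses assigned on the node over SSH:

    $ ssh core@<bootstrap_fqdn> ip addr show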

  5. Collect bootkube.service journald unit logs from the bootstrap node. Replace <bootstrap_fqdn> with the bootstrap node’s fully qualified domain name:

    $ ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service

    The bootkube.service log on the bootstrap node outputs etcd connection refused errors, indicating that the bootstrap server is unable to connect to etcd on control plane nodes. After etcd has started on each control plane node and the nodes have joined the cluster, the errors should stop.

  6. Collect logs from the bootstrap node containers.

    1. Collect the logs using podman on the bootstrap node. Replace <bootstrap_fqdn> with the bootstrap node’s fully qualified domain name:

      $ ssh core@<bootstrap_fqdn> 'for pod in $(sudo podman ps -a -q); do sudo podman logs $pod; done'
  7. If the bootstrap process fails, verify the following:

    • You can resolve api.<cluster_name>.<base_domain> from the installation host.

    • The load balancer proxies port 6443 connections to bootstrap and control plane nodes. Ensure that the proxy configuration meets OpenShift Container Platform installation requirements.
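
    For example, the following commands, run from the installation host, check name resolution for the API endpoint and whether the load balancer accepts connections on port 6443. The curl check assumes that a bootstrap or control plane machine is already serving the API behind the load balancer; a connection refused or timeout at this point usually indicates a load balancer or DNS configuration problem:

    $ dig +short api.<cluster_name>.<base_domain>
    $ curl -vk https://api.<cluster_name>.<base_domain>:6443/readyz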

Investigating control plane node installation issues

If you experience control plane node installation issues, determine the