
About the must-gather tool

The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues, including:

  • Resource definitions

  • Service logs

By default, the oc adm must-gather command uses the default plugin image and writes into ./must-gather.local.
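
For example, running the command with no arguments performs the default collection:

$ oc adm must-gather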

Alternatively, you can collect specific information by running the command with the appropriate arguments as described in the following sections:

  • To collect data related to one or more specific features, use the --image argument with an image, as listed in a following section.

    For example:

    $ oc adm must-gather \
      --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.17.0
  • To collect the audit logs, use the -- /usr/bin/gather_audit_logs argument, as described in a following section.

    For example:

    $ oc adm must-gather -- /usr/bin/gather_audit_logs

    Audit logs are not collected as part of the default set of information to reduce the size of the files.

When you run oc adm must-gather, a new pod with a random name is created in a new project on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local in the current working directory.

For example:

NAMESPACE                      NAME                 READY   STATUS      RESTARTS      AGE
...
openshift-must-gather-5drcj    must-gather-bklx4    2/2     Running     0             72s
openshift-must-gather-5drcj    must-gather-s8sdh    2/2     Running     0             72s
...

Optionally, you can run the oc adm must-gather command in a specific namespace by using the --run-namespace option.

For example:

$ oc adm must-gather --run-namespace <namespace> \
  --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.17.0

Gathering data about your cluster for Red Hat Support

You can gather debugging information about your cluster by using the oc adm must-gather CLI command.

Prerequisites
  • You have access to the cluster as a user with the cluster-admin role.

  • The OpenShift Container Platform CLI (oc) is installed.

Procedure
  1. Navigate to the directory where you want to store the must-gather data.

    If your cluster is in a disconnected environment, you must take additional steps. If your mirror registry has a trusted CA, you must first add the trusted CA to the cluster. For all clusters in disconnected environments, you must import the default must-gather image as an image stream.

    $ oc import-image is/must-gather -n openshift
  2. Run the oc adm must-gather command:

    $ oc adm must-gather

    If you are in a disconnected environment, use the --image flag as part of must-gather and point to the payload image.

    Because this command picks a random control plane node by default, the pod might be scheduled to a control plane node that is in the NotReady and SchedulingDisabled state.

    1. If this command fails, for example, if you cannot schedule a pod on your cluster, then use the oc adm inspect command to gather information for particular resources.

      Contact Red Hat Support for the recommended resources to gather.

  3. Create a compressed file from the must-gather directory that was just created in your working directory. For example, on a computer that uses a Linux operating system, run the following command:

    $ tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ (1)
    1 Make sure to replace must-gather.local.5421342344627712289/ with the actual directory name.
  4. Attach the compressed file to your support case on the Customer Support page of the Red Hat Customer Portal.

Must-gather flags

The flags listed in the following table are available to use with the oc adm must-gather command.

Table 1. OpenShift Container Platform flags for oc adm must-gather
Flag Example command Description

--all-images

oc adm must-gather --all-images=false

Collect must-gather data using the default image for all Operators on the cluster that are annotated with operators.openshift.io/must-gather-image.

--dest-dir

oc adm must-gather --dest-dir='<directory_name>'

Set a specific directory on the local machine where the gathered data is written.

--host-network

oc adm must-gather --host-network=false

Run must-gather pods as hostNetwork: true. Relevant if a specific command and image needs to capture host-level data.

--image

oc adm must-gather --image=[<plugin_image>]

Specify a must-gather plugin image to run. If not specified, OpenShift Container Platform’s default must-gather image is used.

--image-stream

oc adm must-gather --image-stream=[<image_stream>]

Specify an <image_stream> using a namespace/name:tag value containing a must-gather plugin image to run.

--node-name

oc adm must-gather --node-name='<node>'

Set a specific node to use. If not specified, a random control plane node is used by default.

--node-selector

oc adm must-gather --node-selector='<node_selector_name>'

Set a specific node selector to use. Only relevant when specifying a command and image which needs to capture data on a set of cluster nodes simultaneously.

--run-namespace

oc adm must-gather --run-namespace='<namespace>'

An existing privileged namespace where must-gather pods should run. If not specified, a temporary namespace is generated.

--since

oc adm must-gather --since=<time>

Only return logs newer than the specified duration. Defaults to all logs. Plugins are encouraged but not required to support this. Only one of --since or --since-time can be used.

--since-time

oc adm must-gather --since-time='<date_and_time>'

Only return logs after a specific date and time, expressed in RFC 3339 format. Defaults to all logs. Plugins are encouraged but not required to support this. Only one of --since or --since-time can be used.

--source-dir

oc adm must-gather --source-dir='/<directory_name>/'

Set the specific directory on the pod where you copy the gathered data from.

--timeout

oc adm must-gather --timeout='<time>'

The length of time to gather data before timing out, expressed as seconds, minutes, or hours, for example, 3s, 5m, or 2h. Time specified must be higher than zero. Defaults to 10 minutes if not specified.

--volume-percentage

oc adm must-gather --volume-percentage=<percent>

Specify maximum percentage of pod’s allocated volume that can be used for must-gather. If this limit is exceeded, must-gather stops gathering, but still copies gathered data. Defaults to 30% if not specified.
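
For example, you can combine several of these flags in a single invocation. The following sketch uses illustrative values: it writes the gathered data to a directory that you choose, limits collection to logs from the last 24 hours, extends the timeout to 15 minutes, and raises the volume usage limit to 70%:

$ oc adm must-gather \
  --dest-dir='</path/to/local/dir>' \
  --since=24h \
  --timeout=15m \
  --volume-percentage=70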

Gathering data about specific features

You can gather debugging information about specific features by using the oc adm must-gather CLI command with the --image or --image-stream argument. The must-gather tool supports multiple images, so you can gather data about more than one feature by running a single command.

Table 2. Supported must-gather images
Image Purpose

registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.17.0

Data collection for OpenShift Virtualization.

registry.redhat.io/openshift-serverless-1/svls-must-gather-rhel8

Data collection for OpenShift Serverless.

registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:<installed_version_service_mesh>

Data collection for Red Hat OpenShift Service Mesh.

registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v<installed_version_migration_toolkit>

Data collection for the Migration Toolkit for Containers.

registry.redhat.io/odf4/odf-must-gather-rhel9:v<installed_version_ODF>

Data collection for Red Hat OpenShift Data Foundation.

registry.redhat.io/openshift-logging/cluster-logging-rhel9-operator:v<installed_version_logging>

Data collection for logging.

quay.io/netobserv/must-gather

Data collection for the Network Observability Operator.

registry.redhat.io/openshift4/ose-csi-driver-shared-resource-mustgather-rhel8

Data collection for OpenShift Shared Resource CSI Driver.

registry.redhat.io/openshift4/ose-local-storage-mustgather-rhel9:v<installed_version_LSO>

Data collection for Local Storage Operator.

registry.redhat.io/openshift-sandboxed-containers/osc-must-gather-rhel8:v<installed_version_sandboxed_containers>

Data collection for OpenShift sandboxed containers.

registry.redhat.io/workload-availability/node-healthcheck-must-gather-rhel8:v<installed-version-NHC>

Data collection for the Red Hat Workload Availability Operators, including the Self Node Remediation (SNR) Operator, the Fence Agents Remediation (FAR) Operator, the Machine Deletion Remediation (MDR) Operator, the Node Health Check (NHC) Operator, and the Node Maintenance (NMO) Operator.

registry.redhat.io/numaresources/numaresources-must-gather-rhel9:v<installed-version-nro>

Data collection for the NUMA Resources Operator (NRO).

registry.redhat.io/openshift4/ptp-must-gather-rhel8:v<installed-version-ptp>

Data collection for the PTP Operator.

registry.redhat.io/openshift-gitops-1/must-gather-rhel8:v<installed_version_GitOps>

Data collection for Red Hat OpenShift GitOps.

registry.redhat.io/openshift4/ose-secrets-store-csi-mustgather-rhel8:v<installed_version_secret_store>

Data collection for the Secrets Store CSI Driver Operator.

registry.redhat.io/lvms4/lvms-must-gather-rhel9:v<installed_version_LVMS>

Data collection for the LVM Operator.

ghcr.io/complianceascode/must-gather-ocp

Data collection for the Compliance Operator.

To determine the latest version for an OpenShift Container Platform component’s image, see the OpenShift Operator Life Cycles web page on the Red Hat Customer Portal.
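
For example, if both OpenShift Serverless and Red Hat OpenShift GitOps are installed in your cluster, you can gather data for both features in a single run by passing multiple --image arguments. This is a sketch; substitute the images and version placeholders that match your installation:

$ oc adm must-gather \
  --image=registry.redhat.io/openshift-serverless-1/svls-must-gather-rhel8 \
  --image=registry.redhat.io/openshift-gitops-1/must-gather-rhel8:v<installed_version_GitOps>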

Prerequisites
  • You have access to the cluster as a user with the cluster-admin role.

  • The OpenShift Container Platform CLI (oc) is installed.

Procedure
  1. Navigate to the directory where you want to store the must-gather data.

  2. Run the oc adm must-gather command with one or more --image or --image-stream arguments.

    • To collect the default must-gather data in addition to specific feature data, add the --image-stream=openshift/must-gather argument.

    • For information on gathering data about the Custom Metrics Autoscaler, see the Additional resources section that follows.

    For example, the following command gathers both the default cluster data and information specific to OpenShift Virtualization:

    $ oc adm must-gather \
      --image-stream=openshift/must-gather \ (1)
      --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.17.0 (2)
    
    1 The default OpenShift Container Platform must-gather image
    2 The must-gather image for OpenShift Virtualization

    You can use the must-gather tool with additional arguments to gather data that is specifically related to OpenShift Logging and the Red Hat OpenShift Logging Operator in your cluster. For OpenShift Logging, run the following command:

    $ oc adm must-gather --image=$(oc -n openshift-logging get deployment.apps/cluster-logging-operator \
      -o jsonpath='{.spec.template.spec.containers[?(@.name == "cluster-logging-operator")].image}')
    Example must-gather output for OpenShift Logging
    ├── cluster-logging
    │  ├── clo
    │  │  ├── cluster-logging-operator-74dd5994f-6ttgt
    │  │  ├── clusterlogforwarder_cr
    │  │  ├── cr
    │  │  ├── csv
    │  │  ├── deployment
    │  │  └── logforwarding_cr
    │  ├── collector
    │  │  ├── fluentd-2tr64
    │  ├── eo
    │  │  ├── csv
    │  │  ├── deployment
    │  │  └── elasticsearch-operator-7dc7d97b9d-jb4r4
    │  ├── es
    │  │  ├── cluster-elasticsearch
    │  │  │  ├── aliases
    │  │  │  ├── health
    │  │  │  ├── indices
    │  │  │  ├── latest_documents.json
    │  │  │  ├── nodes
    │  │  │  ├── nodes_stats.json
    │  │  │  └── thread_pool
    │  │  ├── cr
    │  │  ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms
    │  │  └── logs
    │  │     ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms
    │  ├── install
    │  │  ├── co_logs
    │  │  ├── install_plan
    │  │  ├── olmo_logs
    │  │  └── subscription
    │  └── kibana
    │     ├── cr
    │     ├── kibana-9d69668d4-2rkvz
    ├── cluster-scoped-resources
    │  └── core
    │     ├── nodes
    │     │  ├── ip-10-0-146-180.eu-west-1.compute.internal.yaml
    │     └── persistentvolumes
    │        ├── pvc-0a8d65d9-54aa-4c44-9ecc-33d9381e41c1.yaml
    ├── event-filter.html
    ├── gather-debug.log
    └── namespaces
       ├── openshift-logging
       │  ├── apps
       │  │  ├── daemonsets.yaml
       │  │  ├── deployments.yaml
       │  │  ├── replicasets.yaml
       │  │  └── statefulsets.yaml
       │  ├── batch
       │  │  ├── cronjobs.yaml
       │  │  └── jobs.yaml
       │  ├── core
       │  │  ├── configmaps.yaml
       │  │  ├── endpoints.yaml
       │  │  ├── events
       │  │  │  ├── elasticsearch-im-app-1596020400-gm6nl.1626341a296c16a1.yaml
       │  │  │  ├── elasticsearch-im-audit-1596020400-9l9n4.1626341a2af81bbd.yaml
       │  │  │  ├── elasticsearch-im-infra-1596020400-v98tk.1626341a2d821069.yaml
       │  │  │  ├── elasticsearch-im-app-1596020400-cc5vc.1626341a3019b238.yaml
       │  │  │  ├── elasticsearch-im-audit-1596020400-s8d5s.1626341a31f7b315.yaml
       │  │  │  ├── elasticsearch-im-infra-1596020400-7mgv8.1626341a35ea59ed.yaml
       │  │  ├── events.yaml
       │  │  ├── persistentvolumeclaims.yaml
       │  │  ├── pods.yaml
       │  │  ├── replicationcontrollers.yaml
       │  │  ├── secrets.yaml
       │  │  └── services.yaml
       │  ├── openshift-logging.yaml
       │  ├── pods
       │  │  ├── cluster-logging-operator-74dd5994f-6ttgt
       │  │  │  ├── cluster-logging-operator
       │  │  │  │  └── cluster-logging-operator
       │  │  │  │     └── logs
       │  │  │  │        ├── current.log
       │  │  │  │        ├── previous.insecure.log
       │  │  │  │        └── previous.log
       │  │  │  └── cluster-logging-operator-74dd5994f-6ttgt.yaml
       │  │  ├── cluster-logging-operator-registry-6df49d7d4-mxxff
       │  │  │  ├── cluster-logging-operator-registry
       │  │  │  │  └── cluster-logging-operator-registry
       │  │  │  │     └── logs
       │  │  │  │        ├── current.log
       │  │  │  │        ├── previous.insecure.log
       │  │  │  │        └── previous.log
       │  │  │  ├── cluster-logging-operator-registry-6df49d7d4-mxxff.yaml
       │  │  │  └── mutate-csv-and-generate-sqlite-db
       │  │  │     └── mutate-csv-and-generate-sqlite-db
       │  │  │        └── logs
       │  │  │           ├── current.log
       │  │  │           ├── previous.insecure.log
       │  │  │           └── previous.log
       │  │  ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms
       │  │  ├── elasticsearch-im-app-1596030300-bpgcx
       │  │  │  ├── elasticsearch-im-app-1596030300-bpgcx.yaml
       │  │  │  └── indexmanagement
       │  │  │     └── indexmanagement
       │  │  │        └── logs
       │  │  │           ├── current.log
       │  │  │           ├── previous.insecure.log
       │  │  │           └── previous.log
       │  │  ├── fluentd-2tr64
       │  │  │  ├── fluentd
       │  │  │  │  └── fluentd
       │  │  │  │     └── logs
       │  │  │  │        ├── current.log
       │  │  │  │        ├── previous.insecure.log
       │  │  │  │        └── previous.log
       │  │  │  ├── fluentd-2tr64.yaml
       │  │  │  └── fluentd-init
       │  │  │     └── fluentd-init
       │  │  │        └── logs
       │  │  │           ├── current.log
       │  │  │           ├── previous.insecure.log
       │  │  │           └── previous.log
       │  │  ├── kibana-9d69668d4-2rkvz
       │  │  │  ├── kibana
       │  │  │  │  └── kibana
       │  │  │  │     └── logs
       │  │  │  │        ├── current.log
       │  │  │  │        ├── previous.insecure.log
       │  │  │  │        └── previous.log
       │  │  │  ├── kibana-9d69668d4-2rkvz.yaml
       │  │  │  └── kibana-proxy
       │  │  │     └── kibana-proxy
       │  │  │        └── logs
       │  │  │           ├── current.log
       │  │  │           ├── previous.insecure.log
       │  │  │           └── previous.log
       │  └── route.openshift.io
       │     └── routes.yaml
       └── openshift-operators-redhat
          ├── ...
  3. Run the oc adm must-gather command with one or more --image or --image-stream arguments. For example, the following command gathers both the default cluster data and information specific to KubeVirt:

    $ oc adm must-gather \
     --image-stream=openshift/must-gather \ (1)
     --image=quay.io/kubevirt/must-gather (2)
    
    1 The default OpenShift Container Platform must-gather image
    2 The must-gather image for KubeVirt
  4. Create a compressed file from the must-gather directory that was just created in your working directory. For example, on a computer that uses a Linux operating system, run the following command:

    $ tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ (1)
    1 Make sure to replace must-gather.local.5421342344627712289/ with the actual directory name.
  5. Attach the compressed file to your support case on the Customer Support page of the Red Hat Customer Portal.

Additional resources

Gathering network logs

You can gather network logs on all nodes in a cluster.

Procedure
  1. Run the oc adm must-gather command with -- gather_network_logs:

    $ oc adm must-gather -- gather_network_logs

    By default, the must-gather tool collects the OVN nbdb and sbdb databases from all of the nodes in the cluster. Adding the -- gather_network_logs option includes additional logs that contain OVN-Kubernetes transactions for the OVN nbdb database.

  2. Create a compressed file from the must-gather directory that was just created in your working directory. For example, on a computer that uses a Linux operating system, run the following command:

    $ tar cvaf must-gather.tar.gz must-gather.local.472290403699006248 (1)
    1 Replace must-gather.local.472290403699006248 with the actual directory name.
  3. Attach the compressed file to your support case on the Customer Support page of the Red Hat Customer Portal.

Changing the must-gather storage limit

When you use the oc adm must-gather command to collect data, the default maximum storage for the information is 30% of the storage capacity of the container. After the 30% limit is reached, the container is killed and the gathering process stops. Information that has already been gathered is downloaded to your local storage. To run the must-gather command again, you need either a container with more storage capacity or you must adjust the maximum volume percentage.

If the container reaches the storage limit, an error message similar to the following example is generated.

Example output
Disk usage exceeds the volume percentage of 30% for mounted directory. Exiting...
Prerequisites
  • You have access to the cluster as a user with the cluster-admin role.

  • The OpenShift CLI (oc) is installed.

Procedure
  • Run the oc adm must-gather command with the --volume-percentage flag. The new value cannot exceed 100.

    $ oc adm must-gather --volume-percentage <storage_percentage>
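
    For example, the following sketch raises the limit to 70% of the storage capacity of the container:

    $ oc adm must-gather --volume-percentage 70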

Obtaining your cluster ID

When providing information to Red Hat Support, it is helpful to provide the unique identifier for your cluster. You can have your cluster ID autofilled by using the OpenShift Container Platform web console. You can also manually obtain your cluster ID by using the web console or the OpenShift CLI (oc).

Prerequisites
  • You have access to the cluster as a user with the cluster-admin role.

  • You have access to the web console, or you have the OpenShift CLI (oc) installed.

Procedure
  • To open a support case and have your cluster ID autofilled using the web console:

    1. From the toolbar, navigate to (?) Help and select Share Feedback from the list.

    2. Click Open a support case from the Tell us about your experience window.

  • To manually obtain your cluster ID using the web console:

    1. Navigate to Home → Overview.

    2. The value is available in the Cluster ID field of the Details section.

  • To obtain your cluster ID using the OpenShift CLI (oc), run the following command:

    $ oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{"\n"}'

About sosreport

sosreport is a tool that collects configuration details, system information, and diagnostic data from Red Hat Enterprise Linux (RHEL) and Red Hat Enterprise Linux CoreOS (RHCOS) systems. sosreport provides a standardized way to collect diagnostic information relating to a node, which can then be provided to Red Hat Support for issue diagnosis.

In some support interactions, Red Hat Support may ask you to collect a sosreport archive for a specific OpenShift Container Platform node. For example, it might sometimes be necessary to review system logs or other node-specific data that is not included within the output of oc adm must-gather.

Generating a sosreport archive for an OpenShift Container Platform cluster node

The recommended way to generate a sosreport for an OpenShift Container Platform 4.17 cluster node is through a debug pod.

Prerequisites
  • You have access to the cluster as a user with the cluster-admin role.

  • You have SSH access to your hosts.

  • You have installed the OpenShift CLI (oc).

  • You have a Red Hat standard or premium Subscription.

  • You have a Red Hat Customer Portal account.

  • You have an existing Red Hat Support case ID.

Procedure
  1. Obtain a list of cluster nodes:

    $ oc get nodes
  2. Enter into a debug session on the target node. This step instantiates a debug pod called <node_name>-debug:

    $ oc debug node/my-cluster-node

    To enter into a debug session on the target node that is tainted with the NoExecute effect, add a toleration to a dummy namespace, and start the debug pod in the dummy namespace:

    $ oc new-project dummy
    $ oc patch namespace dummy --type=merge -p '{"metadata": {"annotations": { "scheduler.alpha.kubernetes.io/defaultTolerations": "[{\"operator\": \"Exists\"}]"}}}'
    $ oc debug node/my-cluster-node
  3. Set /host as the root directory within the debug shell. The debug pod mounts the host’s root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host’s executable paths:

    # chroot /host

    OpenShift Container Platform 4.17 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead.

  4. Start a toolbox container, which includes the required binaries and plugins to run sosreport:

    # toolbox

    If an existing toolbox pod is already running, the toolbox command outputs 'toolbox-' already exists. Trying to start…. Remove the running toolbox container with podman rm toolbox- and spawn a new toolbox container to avoid issues with sosreport plugins.

  5. Collect a sosreport archive.

    1. Run the sos report command to collect necessary troubleshooting data on crio and podman:

      # sos report -k crio.all=on -k crio.logs=on  -k podman.all=on -k podman.logs=on (1)
      1 -k enables you to define sosreport plugin parameters outside of the defaults.
    2. Optional: To include information on OVN-Kubernetes networking configurations from a node in your report, run the following command:

      # sos report --all-logs
    3. Press Enter when prompted to continue.

    4. Provide the Red Hat Support case ID. sosreport adds the ID to the archive’s file name.

    5. The sosreport output provides the archive’s location and checksum. The following sample output references support case ID 01234567:

      Your sosreport has been generated and saved in:
        /host/var/tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz (1)
      
      The checksum is: 382ffc167510fd71b4f12a4f40b97a4e
      1 The sosreport archive’s file path is outside of the chroot environment because the toolbox container mounts the host’s root directory at /host.
  6. Provide the sosreport archive to Red Hat Support for analysis, using one of the following methods.

    • Upload the file to an existing Red Hat support case directly from an OpenShift Container Platform cluster.

      1. From within the toolbox container, run redhat-support-tool to attach the archive directly to an existing Red Hat support case. This example uses support case ID 01234567:

        # redhat-support-tool addattachment -c 01234567 /host/var/tmp/my-sosreport.tar.xz (1)
        1 The toolbox container mounts the host’s root directory at /host. Reference the absolute path from the toolbox container’s root directory, including /host/, when specifying files to upload through the redhat-support-tool command.
    • Upload the file to an existing Red Hat support case.

      1. Concatenate the sosreport archive by running the oc debug node/<node_name> command and redirect the output to a file. This command assumes you have exited the previous oc debug session:

        $ oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz' > /tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz (1)
        1 The debug container mounts the host’s root directory at /host. Reference the absolute path from the debug container’s root directory, including /host, when specifying target files for concatenation.

        OpenShift Container Platform 4.17 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Transferring a sosreport archive from a cluster node by using scp is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to copy a sosreport archive from a node by running scp core@<node>.<cluster_name>.<base_domain>:<file_path> <local_path>.

      2. Navigate to an existing support case within the Customer Support page of the Red Hat Customer Portal.

      3. Select Attach files and follow the prompts to upload the file.

Querying bootstrap node journal logs

If you experience bootstrap-related issues, you can gather bootkube.service journald unit logs and container logs from the bootstrap node.

Prerequisites
  • You have SSH access to your bootstrap node.

  • You have the fully qualified domain name of the bootstrap node.

Procedure
  1. Query bootkube.service journald unit logs from a bootstrap node during OpenShift Container Platform installation. Replace <bootstrap_fqdn> with the bootstrap node’s fully qualified domain name:

    $ ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service

    The bootkube.service log on the bootstrap node outputs etcd connection refused errors, indicating that the bootstrap server is unable to connect to etcd on control plane nodes. After etcd has started on each control plane node and the nodes have joined the cluster, the errors should stop.

  2. Collect logs from the bootstrap node containers using podman on the bootstrap node. Replace <bootstrap_fqdn> with the bootstrap node’s fully qualified domain name:

    $ ssh core@<bootstrap_fqdn> 'for pod in $(sudo podman ps -a -q); do sudo podman logs $pod; done'

Querying cluster node journal logs

You can gather journald unit logs and other logs within /var/log on individual cluster nodes.

Prerequisites
  • You have access to the cluster as a user with the cluster-admin role.

  • You have installed the OpenShift CLI (oc).

  • Your API service is still functional.

  • You have SSH access to your hosts.

Procedure
  1. Query kubelet journald unit logs from OpenShift Container Platform cluster nodes. The following example queries control plane nodes only:

    $ oc adm node-logs --role=master -u kubelet  (1)
    1 Replace kubelet as appropriate to query other unit logs.
  2. Collect logs from specific subdirectories under /var/log/ on cluster nodes.

    1. Retrieve a list of logs contained within a /var/log/ subdirectory. The following example lists files in /var/log/openshift-apiserver/ on all control plane nodes:

      $ oc adm node-logs --role=master --path=openshift-apiserver
    2. Inspect a specific log within a /var/log/ subdirectory. The following example outputs /var/log/openshift-apiserver/audit.log contents from all control plane nodes:

      $ oc adm node-logs --role=master --path=openshift-apiserver/audit.log
    3. If the API is not functional, review the logs on each node using SSH instead. The following example tails /var/log/openshift-apiserver/audit.log:

      $ ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log

      OpenShift Container Platform 4.17 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must-gather and other oc commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain>.
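
As a variation on the first step of this procedure, the following sketch queries the CRI-O unit logs from worker nodes instead of the kubelet logs from control plane nodes. This assumes that crio is the unit name you need:

$ oc adm node-logs --role=worker -u crio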

Network trace methods

Collecting network traces, in the form of packet capture records, can assist Red Hat Support with troubleshooting network issues.

OpenShift Container Platform supports two ways of performing a network trace. Review the following table and choose the method that meets your needs.

Table 3. Supported methods of collecting a network trace
Method Benefits and capabilities

Collecting a host network trace

You perform a packet capture for a duration that you specify on one or more nodes at the same time. The packet capture files are transferred from nodes to the client machine when the specified duration is met.

You can troubleshoot why a specific action triggers network communication issues. Run the packet capture, perform the action that triggers the issue, and use the logs to diagnose the issue.

Collecting a network trace from an OpenShift Container Platform node or container

You perform a packet capture on one node or one container. You run the tcpdump command interactively, so you can control the duration of the packet capture.

You can start the packet capture manually, trigger the network communication issue, and then stop the packet capture manually.

This method uses the cat command and shell redirection to copy the packet capture data from the node or container to the client machine.

Collecting a host network trace

Sometimes, troubleshooting a network-related issue is simplified by tracing network communication and capturing packets on multiple nodes at the same time.

You can use a combination of the oc adm must-gather command and the registry.redhat.io/openshift4/network-tools-rhel8 container image to gather packet captures from nodes. Analyzing packet captures can help you troubleshoot network communication issues.

The oc adm must-gather command is used to run the tcpdump command in pods on specific nodes. The tcpdump command records the packet captures in the pods. When the tcpdump command exits, the oc adm must-gather command transfers the files with the packet captures from the pods to your client machine.

The sample command in the following procedure demonstrates performing a packet capture with the tcpdump command. However, you can run any command in the container image that is specified in the --image argument to gather troubleshooting information from multiple nodes at the same time.

Prerequisites
  • You are logged in to OpenShift Container Platform as a user with the cluster-admin role.

  • You have installed the OpenShift CLI (oc).

Procedure
  1. Run a packet capture from the host network on some nodes by running the following command:

    $ oc adm must-gather \
        --dest-dir /tmp/captures \  (1)
        --source-dir '/tmp/tcpdump/' \  (2)
        --image registry.redhat.io/openshift4/network-tools-rhel8:latest \  (3)
        --node-selector 'node-role.kubernetes.io/worker' \  (4)
        --host-network=true \  (5)
        --timeout 30s \  (6)
        -- \
        tcpdump -i any \  (7)
        -w /tmp/tcpdump/%Y-%m-%dT%H:%M:%S.pcap -W 1 -G 300
    1 The --dest-dir argument specifies that oc adm must-gather stores the packet captures in directories that are relative to /tmp/captures on the client machine. You can specify any writable directory.
    2 When tcpdump is run in the debug pod that oc adm must-gather starts, the --source-dir argument specifies that the packet captures are temporarily stored in the /tmp/tcpdump directory on the pod.
    3 The --image argument specifies a container image that includes the tcpdump command.
    4 The --node-selector argument and example value specifies to perform the packet captures on the worker nodes. As an alternative, you can specify the --node-name argument instead to run the packet capture on a single node. If you omit both the --node-selector and the --node-name argument, the packet captures are performed on all nodes.
    5 The --host-network=true argument is required so that the packet captures are performed on the network interfaces of the node.
    6 The --timeout argument and value specify to run the debug pod for 30 seconds. If you do not specify the --timeout argument and a duration, the debug pod runs for 10 minutes.
    7 The -i any argument for the tcpdump command specifies to capture packets on all network interfaces. As an alternative, you can specify a network interface name.
  2. Perform the action, such as accessing a web application, that triggers the network communication issue while the network trace captures packets.

  3. Review the packet capture files that oc adm must-gather transferred from the pods to your client machine:

    tmp/captures
    ├── event-filter.html
    ├── ip-10-0-192-217-ec2-internal  (1)
    │   └── registry-redhat-io-openshift4-network-tools-rhel8-sha256-bca...
    │       └── 2022-01-13T19:31:31.pcap
    ├── ip-10-0-201-178-ec2-internal  (1)
    │   └── registry-redhat-io-openshift4-network-tools-rhel8-sha256-bca...
    │       └── 2022-01-13T19:31:30.pcap
    ├── ip-...
    └── timestamp
    1 The packet captures are stored in directories that identify the hostname, container, and file name. If you did not specify the --node-selector argument, then the directory level for the hostname is not present.

Collecting a network trace from an OpenShift Container Platform node or container

When investigating potential network-related OpenShift Container Platform issues, Red Hat Support might request a network packet trace from a specific OpenShift Container Platform cluster node or from a specific container. The recommended method to capture a network trace in OpenShift Container Platform is through a debug pod.

Prerequisites
  • You have access to the cluster as a user with the cluster-admin role.

  • You have installed the OpenShift CLI (oc).

  • You have an existing Red Hat Support case ID.

  • You have a Red Hat standard or premium Subscription.

  • You have a Red Hat Customer Portal account.

  • You have SSH access to your hosts.

Procedure
  1. Obtain a list of cluster nodes:

    $ oc get nodes
  2. Enter into a debug session on the target node. This step instantiates a debug pod called <node_name>-debug:

    $ oc debug node/my-cluster-node
  3. Set /host as the root directory within the debug shell. The debug pod mounts the host’s root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host’s executable paths:

    # chroot /host

    OpenShift Container Platform 4.17 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead.

  4. From within the chroot environment console, obtain the node’s interface names:

    # ip ad
  5. Start a toolbox container, which includes the required binaries and plugins to run sosreport:

    # toolbox

    If an existing toolbox pod is already running, the toolbox command outputs 'toolbox-' already exists. Trying to start…. To avoid tcpdump issues, remove the running toolbox container with podman rm toolbox- and spawn a new toolbox container.

  6. Initiate a tcpdump session on the cluster node and redirect output to a capture file. This example uses ens5 as the interface name:

    $ tcpdump -nn -s 0 -i ens5 -w /host/var/tmp/my-cluster-node_$(date +%d_%m_%Y-%H_%M_%S-%Z).pcap  (1)
    1 The tcpdump capture file’s path is outside of the chroot environment because the toolbox container mounts the host’s root directory at /host.
  7. If a tcpdump capture is required for a specific container on the node, follow these steps.

    1. Determine the target container ID. The chroot host command precedes the crictl command in this step because the toolbox container mounts the host’s root directory at /host:

      # chroot /host crictl ps
    2. Determine the container’s process ID. In this example, the container ID is a7fe32346b120:

      # chroot /host crictl inspect --output yaml a7fe32346b120 | grep 'pid' | awk '{print $2}'
    3. Initiate a tcpdump session on the container and redirect output to a capture file. This example uses 49628 as the container’s process ID and ens5 as the interface name. The nsenter command enters the namespace of a target process and runs a command in its namespace. Because the target process in this example is a container’s process ID, the tcpdump command is run in the container’s namespace from the host:

      # nsenter -n -t 49628 -- tcpdump -nn -i ens5 -w /host/var/tmp/my-cluster-node-my-container_$(date +%d_%m_%Y-%H_%M_%S-%Z).pcap  (1)
      1 The tcpdump capture file’s path is outside of the chroot environment because the toolbox container mounts the host’s root directory at /host.
  8. Provide the tcpdump capture file to Red Hat Support for analysis, using one of the following methods.

    • Upload the file to an existing Red Hat support case directly from an OpenShift Container Platform cluster.

      1. From within the toolbox container, run redhat-support-tool to attach the file directly to an existing Red Hat Support case. This example uses support case ID 01234567:

        # redhat-support-tool addattachment -c 01234567 /host/var/tmp/my-tcpdump-capture-file.pcap (1)
        1 The toolbox container mounts the host’s root directory at /host. Reference the absolute path from the toolbox container’s root directory, including /host/, when specifying files to upload through the redhat-support-tool command.
    • Upload the file to an existing Red Hat support case.

      1. Concatenate the tcpdump capture file by running the oc debug node/<node_name> command and redirect the output to a file. This command assumes you have exited the previous oc debug session:

        $ oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-tcpdump-capture-file.pcap' > /tmp/my-tcpdump-capture-file.pcap (1)
        1 The debug container mounts the host’s root directory at /host. Reference the absolute path from the debug container’s root directory, including /host, when specifying target files for concatenation.

        OpenShift Container Platform 4.17 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Transferring a tcpdump capture file from a cluster node by using scp is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to copy a tcpdump capture file from a node by running scp core@<node>.<cluster_name>.<base_domain>:<file_path> <local_path>.

      2. Navigate to an existing support case within the Customer Support page of the Red Hat Customer Portal.

      3. Select Attach files and follow the prompts to upload the file.

Providing diagnostic data to Red Hat Support

When investigating OpenShift Container Platform issues, Red Hat Support might ask you to upload diagnostic data to a support case. Files can be uploaded to a support case through the Red Hat Customer Portal, or from an OpenShift Container Platform cluster directly by using the redhat-support-tool command.

Prerequisites
  • You have access to the cluster as a user with the cluster-admin role.

  • You have installed the OpenShift CLI (oc).

  • You have SSH access to your hosts.

  • You have a Red Hat standard or premium Subscription.

  • You have a Red Hat Customer Portal account.

  • You have an existing Red Hat Support case ID.

Procedure
  • Upload diagnostic data to an existing Red Hat support case through the Red Hat Customer Portal.

    1. Concatenate a diagnostic file contained on an OpenShift Container Platform node by using the oc debug node/<node_name> command and redirect the output to a file. The following example copies /host/var/tmp/my-diagnostic-data.tar.gz from a debug container to /var/tmp/my-diagnostic-data.tar.gz:

      $ oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-diagnostic-data.tar.gz' > /var/tmp/my-diagnostic-data.tar.gz (1)
      1 The debug container mounts the host’s root directory at /host. Reference the absolute path from the debug container’s root directory, including /host, when specifying target files for concatenation.

      OpenShift Container Platform 4.17 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Transferring files from a cluster node by using scp is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to copy diagnostic files from a node by running scp core@<node>.<cluster_name>.<base_domain>:<file_path> <local_path>.

    2. Navigate to an existing support case within the Customer Support page of the Red Hat Customer Portal.

    3. Select Attach files and follow the prompts to upload the file.

  • Upload diagnostic data to an existing Red Hat support case directly from an OpenShift Container Platform cluster.

    1. Obtain a list of cluster nodes:

      $ oc get nodes
    2. Enter into a debug session on the target node. This step instantiates a debug pod called <node_name>-debug:

      $ oc debug node/my-cluster-node
    3. Set /host as the root directory within the debug shell. The debug pod mounts the host’s root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host’s executable paths:

      # chroot /host

      OpenShift Container Platform 4.17 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead.

    4. Start a toolbox container, which includes the required binaries to run redhat-support-tool:

      # toolbox

      If an existing toolbox pod is already running, the toolbox command outputs 'toolbox-' already exists. Trying to start…. Remove the running toolbox container with podman rm toolbox- and spawn a new toolbox container to avoid issues.

    5. Run redhat-support-tool to attach a file from the debug pod directly to an existing Red Hat Support case. This example uses support case ID 01234567 and example file path /host/var/tmp/my-diagnostic-data.tar.gz:

        # redhat-support-tool addattachment -c 01234567 /host/var/tmp/my-diagnostic-data.tar.gz (1)
        1 The toolbox container mounts the host’s root directory at /host. Reference the absolute path from the toolbox container’s root directory, including /host/, when specifying files to upload through the redhat-support-tool command.

About toolbox

toolbox is a tool that starts a container on a Red Hat Enterprise Linux CoreOS (RHCOS) system. The tool is primarily used to start a container that includes the required binaries and plugins that are needed to run commands such as sosreport and redhat-support-tool.

The primary purpose for a toolbox container is to gather diagnostic information and to provide it to Red Hat Support. However, if additional diagnostic tools are required, you can add RPM packages or run an image that is an alternative to the standard support tools image.

Installing packages to a toolbox container

By default, running the toolbox command starts a container with the registry.redhat.io/rhel8/support-tools:latest image. This image contains the most frequently used support tools. If you need to collect node-specific data that requires a support tool that is not part of the image, you can install additional packages.

Prerequisites
  • You have accessed a node with the oc debug node/<node_name> command.

Procedure
  1. Set /host as the root directory within the debug shell. The debug pod mounts the host’s root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host’s executable paths:

    # chroot /host
  2. Start the toolbox container:

    # toolbox
  3. Install the additional package, such as wget:

    # dnf install -y <package_name>
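
    For example, to install wget:

    # dnf install -y wget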

Starting an alternative image with toolbox

By default, running the toolbox command starts a container with the registry.redhat.io/rhel8/support-tools:latest image. You can start an alternative image by creating a .toolboxrc file and specifying the image to run.

Prerequisites
  • You have accessed a node with the oc debug node/<node_name> command.

Procedure
  1. Set /host as the root directory within the debug shell. The debug pod mounts the host’s root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host’s executable paths:

    # chroot /host
  2. Create a .toolboxrc file in the home directory for the root user ID:

    # vi ~/.toolboxrc
    REGISTRY=quay.io                (1)
    IMAGE=fedora/fedora:33-x86_64   (2)
    TOOLBOX_NAME=toolbox-fedora-33  (3)
    1 Optional: Specify an alternative container registry.
    2 Specify an alternative image to start.
    3 Optional: Specify an alternative name for the toolbox container.
  3. Start a toolbox container with the alternative image:

    # toolbox

    If an existing toolbox pod is already running, the toolbox command outputs 'toolbox-' already exists. Trying to start…. Remove the running toolbox container with podman rm toolbox- and spawn a new toolbox container to avoid issues with sosreport plugins.