
OpenShift Container Platform runs on Red Hat Enterprise Linux CoreOS (RHCOS). You can follow these procedures to troubleshoot problems related to the operating system.

Investigating kernel crashes

The kdump service, included in kexec-tools, provides a crash-dumping mechanism. You can use this service to save the contents of the system’s memory for later analysis.

The kdump service is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.


Enabling kdump

RHCOS ships with the kexec-tools package, but manual configuration is required to enable the kdump service.

Procedure

Perform the following steps to enable kdump on RHCOS:

  1. To reserve memory for the crash kernel when the first kernel boots, provide kernel arguments by entering the following command:

    # rpm-ostree kargs --append='crashkernel=256M'
  2. Optional: To write the crash dump over the network or to some other location, rather than to the default local /var/crash location, edit the /etc/kdump.conf configuration file.

    Network dumps are required when using LUKS. kdump does not support local crash dumps on LUKS-encrypted devices.

    For details on configuring the kdump service, see the comments in /etc/sysconfig/kdump, /etc/kdump.conf, and the kdump.conf manual page. Also refer to the RHEL kdump documentation for further information on configuring the dump target.

  3. Enable the kdump systemd service:

    # systemctl enable kdump.service
  4. Reboot your system:

    # systemctl reboot
  5. Verify that kdump has loaded a crash kernel: check that the kdump.service has started and exited successfully, and that cat /sys/kernel/kexec_crash_loaded prints 1.
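
The checks in the final step can be sketched as the following commands, run on the node after it reboots. The comments describe expected results, not guaranteed output:

```shell
# Confirm that the kdump service started and exited without errors
systemctl status kdump.service

# A value of 1 indicates that a crash kernel is loaded
cat /sys/kernel/kexec_crash_loaded
```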

Enabling kdump on day-1

The kdump service is intended to be enabled per node to debug kernel problems. Because enabling kdump has costs, and these costs accumulate with each additional kdump-enabled node, enable kdump only on the nodes where you need it. Potential costs of enabling kdump on each node include:

  • Less available RAM due to memory being reserved for the crash kernel.

  • Node unavailability while the kernel is dumping the core.

  • Additional storage space being used to store the crash dumps.

  • Not being production-ready because the kdump service is in Technology Preview.

If you are aware of the downsides and trade-offs of having the kdump service enabled, it is possible to enable kdump in a cluster-wide fashion. Although machine-specific machine configs are not yet supported, you can perform the previous steps through a systemd unit in a MachineConfig object on day-1 and have kdump enabled on all nodes in the cluster. You can create a MachineConfig object and inject that object into the set of manifest files used by Ignition during cluster setup. See "Customizing nodes" in the Installing → Installation configuration section for more information and examples on how to use Ignition configs.
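
As a rough sketch of that flow, assuming the standard openshift-install workflow (the directory name mycluster is an example):

```shell
# Generate the installation manifests
openshift-install create manifests --dir=mycluster

# Copy the MachineConfig object created in the following procedure
# into the set of manifests that Ignition consumes
cp 99-master-kdump-configuration.yaml mycluster/openshift/

# Continue with cluster creation; the extra manifest is picked up
openshift-install create cluster --dir=mycluster
```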

Procedure

Create a MachineConfig object for cluster-wide configuration:

  1. Optional: If you change the /etc/kdump.conf configuration from the default, you can encode it into base64 format to include its content in your MachineConfig object:

    $ cat << EOF | base64
    path /var/crash
    core_collector makedumpfile -l --message-level 7 -d 31
    EOF
  2. Optional: If you change the configuration from the default, create the content of the /etc/sysconfig/kdump file and encode it as base64:

    $ cat << EOF | base64
    KDUMP_COMMANDLINE_REMOVE="hugepages hugepagesz slub_debug quiet log_buf_len swiotlb"
    KDUMP_COMMANDLINE_APPEND="irqpoll nr_cpus=1 reset_devices cgroup_disable=memory mce=off numa=off udev.children-max=2 panic=10 rootflags=nofail acpi_no_memhotplug transparent_hugepage=never nokaslr novmcoredd hest_disable"
    KEXEC_ARGS="-s"
    KDUMP_IMG="vmlinuz"
    EOF
  3. Create the MachineConfig object file:

    $ cat << EOF > ./99-master-kdump-configuration.yaml
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: master (1)
      name: 99-master-kdump-configuration
    spec:
      kernelArguments:
        - 'crashkernel=256M' (2)
      config:
        ignition:
          version: 3.2.0
        storage:
          files:
            - contents:
                source: data:text/plain;charset=utf-8;base64,ICAgIHBhdGggL3Zhci9jcmFzaAogICAgY2...  (3)
              mode: 420
              overwrite: true
              path: /etc/kdump.conf
            - contents:
                source: data:text/plain;charset=utf-8;base64,S0RVTVBfQ09NTUFORExJTkVfUkVNT1ZFPS...  (4)
              mode: 420
              overwrite: true
              path: /etc/sysconfig/kdump
        systemd:
          units:
            - enabled: true
              name: kdump.service
    EOF
    1 Replace master with worker for creating a MachineConfig object for the worker role.
    2 Provide kernel arguments to reserve memory for the crash kernel. You can add other kernel arguments if necessary.
    3 Replace the base64 content with the one you created for /etc/kdump.conf.
    4 Replace the base64 content with the one you created for /etc/sysconfig/kdump.
  4. Include the YAML file among the installation manifests during cluster setup. You can also create this MachineConfig object after cluster setup by applying the YAML file:

    $ oc create -f ./99-master-kdump-configuration.yaml
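
Before pasting the base64 payloads into the MachineConfig object, you can sanity-check that they decode back to the original content. A minimal sketch, assuming GNU coreutils base64:

```shell
# The content mirrors the /etc/kdump.conf example from step 1;
# substitute your own configuration as needed.
conf='path /var/crash
core_collector makedumpfile -l --message-level 7 -d 31'

# -w0 disables line wrapping so the payload stays on a single line,
# which is what the data URL in the MachineConfig expects
encoded=$(printf '%s\n' "$conf" | base64 -w0)

# Decode it again and compare with the original
decoded=$(printf '%s' "$encoded" | base64 -d)
[ "$decoded" = "$conf" ] && echo "round-trip OK"
```
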

Testing the kdump configuration

See the Testing the kdump configuration section in the RHEL documentation for kdump.

Analyzing a core dump

See the Analyzing a core dump section in the RHEL documentation for kdump.

It is recommended to perform vmcore analysis on a separate RHEL system.
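
As a hedged sketch of what that analysis typically looks like with the crash utility, where <kernel-version> and the vmcore path are placeholders and the installed kernel-debuginfo package must match the kernel that crashed:

```shell
# Install the crash utility and matching debug symbols on the analysis system
sudo dnf install crash kernel-debuginfo

# Open the dump: the first argument is the debug vmlinux, the second the vmcore
crash /usr/lib/debug/lib/modules/<kernel-version>/vmlinux \
      /var/crash/<dump-directory>/vmcore
```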
