For OpenShift Container Platform, Red Hat Enterprise Linux CoreOS (RHCOS) is the only supported operating system for the control plane, whereas node hosts can run either RHCOS or Red Hat Enterprise Linux (RHEL). With Red Hat Advanced Cluster Security for Kubernetes (RHACS), you can scan RHCOS nodes for vulnerabilities and detect potential security threats.

RHACS scans the RPMs that were installed on the node host as part of the RHCOS installation for known vulnerabilities.

First, RHACS analyzes and detects RHCOS components. Then it matches vulnerabilities for identified components by using RHEL and OpenShift 4.X Open Vulnerability and Assessment Language (OVAL) v2 security data streams.
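If you want to inspect the kind of security data that Scanner consumes, Red Hat publishes the OVAL v2 streams publicly. The following commands are only an illustration; the exact file name, and which streams your Scanner version downloads, are assumptions:

  $ curl -sO https://access.redhat.com/security/data/oval/v2/RHEL8/rhel-8.oval.xml.bz2
  $ bzcat rhel-8.oval.xml.bz2 | grep -c '<definition '   # rough count of vulnerability definitions in the stream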

  • If you installed RHACS by using the roxctl CLI, you must manually enable the RHCOS node scanning features. When you use Helm or Operator installation methods on OpenShift Container Platform, this feature is enabled by default.
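Before you follow the procedure in the next section, you can check whether the feature flag is already set on the Compliance container. This is a minimal check that assumes the stackrox namespace and the collector DaemonSet name used throughout this document; an empty result means the variable is not set:

  $ oc -n stackrox get daemonset/collector \
      -o jsonpath='{.spec.template.spec.containers[?(@.name=="compliance")].env[?(@.name=="ROX_RHCOS_NODE_SCANNING")].value}'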

Enabling RHCOS node scanning

If you use OpenShift Container Platform, you can enable scanning of Red Hat Enterprise Linux CoreOS (RHCOS) nodes for vulnerabilities by using Red Hat Advanced Cluster Security for Kubernetes (RHACS).

Prerequisites
Procedure
  1. Run one of the following commands to update the Compliance container:

    • For a default compliance container with metrics disabled, run the following command:

      $ oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"name":"compliance","env":[{"name":"ROX_METRICS_PORT","value":"disabled"},{"name":"ROX_NODE_SCANNING_ENDPOINT","value":"127.0.0.1:8444"},{"name":"ROX_NODE_SCANNING_INTERVAL","value":"4h"},{"name":"ROX_NODE_SCANNING_INTERVAL_DEVIATION","value":"24m"},{"name":"ROX_NODE_SCANNING_MAX_INITIAL_WAIT","value":"5m"},{"name":"ROX_RHCOS_NODE_SCANNING","value":"true"},{"name":"ROX_CALL_NODE_INVENTORY_ENABLED","value":"true"}]}]}}}}'
    • For a compliance container with Prometheus metrics enabled, run the following command:

      $ oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"name":"compliance","env":[{"name":"ROX_METRICS_PORT","value":":9091"},{"name":"ROX_NODE_SCANNING_ENDPOINT","value":"127.0.0.1:8444"},{"name":"ROX_NODE_SCANNING_INTERVAL","value":"4h"},{"name":"ROX_NODE_SCANNING_INTERVAL_DEVIATION","value":"24m"},{"name":"ROX_NODE_SCANNING_MAX_INITIAL_WAIT","value":"5m"},{"name":"ROX_RHCOS_NODE_SCANNING","value":"true"},{"name":"ROX_CALL_NODE_INVENTORY_ENABLED","value":"true"}]}]}}}}'
  2. Update the Collector DaemonSet (DS) by performing the following steps. An optional verification example follows the procedure:

    1. Add new volume mounts to Collector DS by running the following command:

      $ oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"volumes":[{"name":"tmp-volume","emptyDir":{}},{"name":"cache-volume","emptyDir":{"sizeLimit":"200Mi"}}]}}}}'
    2. Add the new NodeScanner container by running the following command:

      $ oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"command":["/scanner","--nodeinventory","--config=",""],"env":[{"name":"ROX_NODE_NAME","valueFrom":{"fieldRef":{"apiVersion":"v1","fieldPath":"spec.nodeName"}}},{"name":"ROX_CLAIR_V4_SCANNING","value":"true"},{"name":"ROX_COMPLIANCE_OPERATOR_INTEGRATION","value":"true"},{"name":"ROX_CSV_EXPORT","value":"false"},{"name":"ROX_DECLARATIVE_CONFIGURATION","value":"false"},{"name":"ROX_INTEGRATIONS_AS_CONFIG","value":"false"},{"name":"ROX_NETPOL_FIELDS","value":"true"},{"name":"ROX_NETWORK_DETECTION_BASELINE_SIMULATION","value":"true"},{"name":"ROX_NETWORK_GRAPH_PATTERNFLY","value":"true"},{"name":"ROX_NODE_SCANNING_CACHE_TIME","value":"3h36m"},{"name":"ROX_NODE_SCANNING_INITIAL_BACKOFF","value":"30s"},{"name":"ROX_NODE_SCANNING_MAX_BACKOFF","value":"5m"},{"name":"ROX_PROCESSES_LISTENING_ON_PORT","value":"false"},{"name":"ROX_QUAY_ROBOT_ACCOUNTS","value":"true"},{"name":"ROX_ROXCTL_NETPOL_GENERATE","value":"true"},{"name":"ROX_SOURCED_AUTOGENERATED_INTEGRATIONS","value":"false"},{"name":"ROX_SYSLOG_EXTRA_FIELDS","value":"true"},{"name":"ROX_SYSTEM_HEALTH_PF","value":"false"},{"name":"ROX_VULN_MGMT_WORKLOAD_CVES","value":"false"}],"image":"registry.redhat.io/advanced-cluster-security/rhacs-scanner-slim-rhel8:4.4.7","imagePullPolicy":"IfNotPresent","name":"node-inventory","ports":[{"containerPort":8444,"name":"grpc","protocol":"TCP"}],"volumeMounts":[{"mountPath":"/host","name":"host-root-ro","readOnly":true},{"mountPath":"/tmp/","name":"tmp-volume"},{"mountPath":"/cache","name":"cache-volume"}]}]}}}}'
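
After you apply the patches, you can optionally confirm that the DaemonSet rolled out and that the new environment variables, volumes, and node-inventory container are present. The following commands are a verification sketch that assumes the default stackrox namespace:

  $ oc -n stackrox rollout status daemonset/collector
  $ oc -n stackrox get daemonset/collector -o jsonpath='{.spec.template.spec.containers[*].name}'
  # expected to include: collector compliance node-inventory
  $ oc -n stackrox get daemonset/collector -o jsonpath='{.spec.template.spec.volumes[*].name}'
  # expected to include: tmp-volume cache-volume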

Analysis and detection

When you use RHACS with OpenShift Container Platform, RHACS creates two coordinating containers for analysis and detection: the Compliance container and the Node-inventory container. The Compliance container was already part of earlier RHACS versions, but the Node-inventory container is new with RHACS 4.0 and works only with OpenShift Container Platform cluster nodes.
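To observe what the Node-inventory container reports for a particular node, you can view its logs in one of the collector pods. This is an illustrative command that assumes the default stackrox namespace and the node-inventory container name used in the manual patch above:

  $ oc -n stackrox logs daemonset/collector -c node-inventory --tail=20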

Upon start-up, the Compliance and Node-inventory containers begin the first inventory scan of Red Hat Enterprise Linux CoreOS (RHCOS) software components within five minutes. Next, the Node-inventory container scans the node’s file system to identify installed RPM packages and report on RHCOS software components. Afterward, inventory scanning occurs at periodic intervals, typically every four hours. You can customize the default interval by configuring the ROX_NODE_SCANNING_INTERVAL environment variable for the Compliance container.
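For example, to change the scan interval to six hours on a manually patched installation, you might set the variable directly on the Compliance container. This is a sketch only; on Operator-managed or Helm-managed installations, set the variable through the corresponding customization options instead, because direct edits can be reconciled away:

  $ oc -n stackrox set env daemonset/collector -c compliance ROX_NODE_SCANNING_INTERVAL=6h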

Vulnerability matching

Central services, which include Central and Scanner, perform vulnerability matching. Scanner uses Red Hat’s Open Vulnerability and Assessment Language (OVAL) v2 security data streams to match vulnerabilities on Red Hat Enterprise Linux CoreOS (RHCOS) software components.

Unlike earlier versions, RHACS 4.0 no longer uses Kubernetes node metadata to find the kernel and container runtime versions. Instead, it derives that information from the installed RHCOS RPMs.
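For reference, you can view the same RPM data on an RHCOS node by querying its RPM database from a debug pod. The node name is a placeholder, and the package names in the grep pattern are only examples; the grep runs locally on the streamed command output:

  $ oc debug node/<node-name> -- chroot /host rpm -qa | grep -E 'kernel|cri-o|runc|containerd'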

Related environment variables

You can use the following environment variables to configure RHCOS node scanning on RHACS.

Table 1. Node-inventory configuration
Environment Variable Description

ROX_NODE_SCANNING_CACHE_TIME

The time after which a cached inventory is considered outdated. The default value is 90% of ROX_NODE_SCANNING_INTERVAL, which is 3h36m.

ROX_NODE_SCANNING_INITIAL_BACKOFF

The initial time, in seconds, by which a node scan is delayed if a backoff file is found. The default value is 30s.

ROX_NODE_SCANNING_MAX_BACKOFF

The upper limit of the backoff. The default value is 5m, which is 50% of the Kubernetes restart policy stability timer.

Table 2. Compliance configuration
Environment Variable Description

ROX_NODE_SCANNING_INTERVAL

The base value of the interval duration between node scans. The default value is 4h.

ROX_NODE_SCANNING_INTERVAL_DEVIATION

The amount of time by which the node scan interval may differ from the base value. The maximum deviation is limited by ROX_NODE_SCANNING_INTERVAL.

ROX_NODE_SCANNING_MAX_INITIAL_WAIT

The maximum wait time before the first node scan; the actual wait time is generated randomly within this limit. You can set this value to 0 to disable the initial node scanning wait time. The default value is 5m.
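
On installations where you manage the collector DaemonSet directly, such as after the manual roxctl-based procedure earlier in this document, you can adjust these variables on the relevant container. A minimal sketch, assuming the default stackrox namespace:

  $ oc -n stackrox set env daemonset/collector -c compliance ROX_NODE_SCANNING_MAX_INITIAL_WAIT=0
  $ oc -n stackrox set env daemonset/collector -c node-inventory ROX_NODE_SCANNING_CACHE_TIME=2h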

Identifying vulnerabilities in nodes

You can use the Vulnerability Management view to identify vulnerabilities in your nodes. The identified vulnerabilities include vulnerabilities in:

  • Core Kubernetes components.

  • Container runtimes (Docker, CRI-O, runC, and containerd).

Red Hat Advanced Cluster Security for Kubernetes can identify vulnerabilities in the following operating systems:

  • Amazon Linux 2

  • CentOS

  • Debian

  • Garden Linux (Debian 11)

  • Red Hat Enterprise Linux CoreOS (RHCOS)

  • Red Hat Enterprise Linux (RHEL)

  • Ubuntu (AWS, Microsoft Azure, GCP, and GKE specific versions)

Procedure
  1. In the RHACS portal, go to Vulnerability Management → Dashboard.

  2. Select Nodes on the Dashboard view header to view a list of all the CVEs affecting your nodes.

  3. Select a node from the list to view details of all CVEs affecting that node.

    1. When you select a node, the Node details panel opens for the selected node. The Node view shows in-depth details of the node and includes information about CVEs by CVSS score and fixable CVEs for that node.

    2. Select View All on the CVEs by CVSS score widget header to view a list of all the CVEs in the selected node. You can also filter the list of CVEs.

    3. To export the fixable CVEs as a CSV file, select Export as CSV under the Node Findings section.