ScanSettingBinding APIs are recommended to run compliance scans with the Compliance Operator. For more information on these API objects, run:
$ oc explain scansettings
$ oc explain scansettingbindings
You can run a scan using the Center for Internet Security (CIS) profiles. For convenience, the Compliance Operator creates a ScanSetting object with reasonable defaults on startup. This ScanSetting object is named default.
For all-in-one control plane and worker nodes, the compliance scan runs twice on the worker and control plane nodes. The compliance scan might generate inconsistent scan results. You can avoid inconsistent results by defining only a single role in the ScanSetting object.
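For example, a ScanSetting that defines only the worker role could look like the following sketch. The object name single-role-scan is an assumption for illustration, and the remaining fields mirror the defaults shown below; verify the fields with oc explain scansettings before using it.

apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: single-role-scan          # hypothetical name for illustration
  namespace: openshift-compliance
roles:
- worker                          # a single role avoids duplicate node scans
scanTolerations:
- operator: Exists
schedule: 0 1 * * *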
Inspect the default ScanSetting object by running:
$ oc describe scansettings default -n openshift-compliance
Name:         default
Namespace:    openshift-compliance
Labels:       <none>
Annotations:  <none>
API Version:  compliance.openshift.io/v1alpha1
Kind:         ScanSetting
Metadata:
  Creation Timestamp:  2022-10-10T14:07:29Z
  Generation:          1
  Managed Fields:
    API Version:  compliance.openshift.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:rawResultStorage:
        .:
        f:nodeSelector:
          .:
          f:node-role.kubernetes.io/master:
        f:pvAccessModes:
        f:rotation:
        f:size:
        f:tolerations:
      f:roles:
      f:scanTolerations:
      f:schedule:
      f:showNotApplicable:
      f:strictNodeScan:
    Manager:         compliance-operator
    Operation:       Update
    Time:            2022-10-10T14:07:29Z
  Resource Version:  56111
  UID:               c21d1d14-3472-47d7-a450-b924287aec90
Raw Result Storage:
  Node Selector:
    node-role.kubernetes.io/master:
  Pv Access Modes:
    ReadWriteOnce (1)
  Rotation:  3 (2)
  Size:      1Gi (3)
  Tolerations:
    Effect:              NoSchedule
    Key:                 node-role.kubernetes.io/master
    Operator:            Exists
    Effect:              NoExecute
    Key:                 node.kubernetes.io/not-ready
    Operator:            Exists
    Toleration Seconds:  300
    Effect:              NoExecute
    Key:                 node.kubernetes.io/unreachable
    Operator:            Exists
    Toleration Seconds:  300
    Effect:              NoSchedule
    Key:                 node.kubernetes.io/memory-pressure
    Operator:            Exists
Roles:
  master (4)
  worker (4)
Scan Tolerations: (5)
  Operator:  Exists
Schedule:             0 1 * * * (6)
Show Not Applicable:  false
Strict Node Scan:     true
Events:               <none>
(1) The Compliance Operator creates a persistent volume (PV) that contains the results of the scans. By default, the PV uses the ReadWriteOnce access mode.
(2) The Compliance Operator keeps the results of three subsequent scans in the volume; older scans are rotated out.
(3) The Compliance Operator allocates 1 GB of storage for the scan results.
(4) If the scan setting uses any profiles that scan cluster nodes, scan these node roles.
(5) The default scan setting object scans all the nodes.
(6) The default scan setting object runs scans at 01:00 each day.
As an alternative to the default scan setting, you can use
default-auto-apply, which has the following settings:
Name:         default-auto-apply
Namespace:    openshift-compliance
Labels:       <none>
Annotations:  <none>
API Version:  compliance.openshift.io/v1alpha1
Auto Apply Remediations:   true
Auto Update Remediations:  true
Kind:         ScanSetting
Metadata:
  Creation Timestamp:  2022-10-18T20:21:00Z
  Generation:          1
  Managed Fields:
    API Version:  compliance.openshift.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:autoApplyRemediations: (1)
      f:autoUpdateRemediations: (1)
      f:rawResultStorage:
        .:
        f:nodeSelector:
          .:
          f:node-role.kubernetes.io/master:
        f:pvAccessModes:
        f:rotation:
        f:size:
        f:tolerations:
      f:roles:
      f:scanTolerations:
      f:schedule:
      f:showNotApplicable:
      f:strictNodeScan:
    Manager:         compliance-operator
    Operation:       Update
    Time:            2022-10-18T20:21:00Z
  Resource Version:  38840
  UID:               8cb0967d-05e0-4d7a-ac1c-08a7f7e89e84
Raw Result Storage:
  Node Selector:
    node-role.kubernetes.io/master:
  Pv Access Modes:
    ReadWriteOnce
  Rotation:  3
  Size:      1Gi
  Tolerations:
    Effect:              NoSchedule
    Key:                 node-role.kubernetes.io/master
    Operator:            Exists
    Effect:              NoExecute
    Key:                 node.kubernetes.io/not-ready
    Operator:            Exists
    Toleration Seconds:  300
    Effect:              NoExecute
    Key:                 node.kubernetes.io/unreachable
    Operator:            Exists
    Toleration Seconds:  300
    Effect:              NoSchedule
    Key:                 node.kubernetes.io/memory-pressure
    Operator:            Exists
Roles:
  master
  worker
Scan Tolerations:
  Operator:  Exists
Schedule:             0 1 * * *
Show Not Applicable:  false
Strict Node Scan:     true
Events:               <none>

(1) Setting autoApplyRemediations and autoUpdateRemediations to true enables the Compliance Operator to apply and update remediations automatically without extra steps.
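If you want remediations applied automatically, one option is to reference default-auto-apply instead of default in the settingsRef of a ScanSettingBinding. The following is only a sketch with an assumed object name, auto-remediation-compliance; the procedure below continues with the default ScanSetting object.

apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: auto-remediation-compliance   # hypothetical name for illustration
  namespace: openshift-compliance
profiles:
- name: ocp4-cis
  kind: Profile
  apiGroup: compliance.openshift.io/v1alpha1
settingsRef:
  name: default-auto-apply             # binds the scan to the auto-remediating settings
  kind: ScanSetting
  apiGroup: compliance.openshift.io/v1alpha1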
Create a ScanSettingBinding object that binds to the default ScanSetting object and scans the cluster using the cis and cis-node profiles. For example:
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: cis-compliance
  namespace: openshift-compliance
profiles:
- name: ocp4-cis-node
  kind: Profile
  apiGroup: compliance.openshift.io/v1alpha1
- name: ocp4-cis
  kind: Profile
  apiGroup: compliance.openshift.io/v1alpha1
settingsRef:
  name: default
  kind: ScanSetting
  apiGroup: compliance.openshift.io/v1alpha1
Create the ScanSettingBinding object by running:
$ oc create -f <file-name>.yaml -n openshift-compliance
At this point in the process, the ScanSettingBinding object is reconciled and, based on the Binding and the Bound settings, the Compliance Operator creates a ComplianceSuite object and the associated ComplianceScan objects.
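For example, to confirm that the suite was created, you can list the generated ComplianceSuite objects; the output varies by cluster:

$ oc get compliancesuites -n openshift-compliance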
Follow the compliance scan progress by running:
$ oc get compliancescan -w -n openshift-compliance
The scans progress through the scanning phases and eventually reach the
DONE phase when complete. In most cases, the result of the scan is
NON-COMPLIANT. You can review the scan results and start applying remediations to make the cluster compliant. See Managing Compliance Operator remediation for more information.
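As a starting point for that review, you can list the individual ComplianceCheckResult objects produced by the scans; filtering or sorting the output is left to your preferred oc options:

$ oc get compliancecheckresults -n openshift-compliance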
The result server pod mounts the persistent volume (PV) that stores the raw Asset Reporting Format (ARF) scan results. The nodeSelector and tolerations attributes enable you to configure the location of the result server pod.
This is helpful for those environments where control plane nodes are not permitted to mount persistent volumes.
Create a ScanSetting custom resource (CR) for the Compliance Operator. Define the ScanSetting CR and save the YAML file, for example, rs-workers.yaml:
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: rs-on-workers
  namespace: openshift-compliance
rawResultStorage:
  nodeSelector:
    node-role.kubernetes.io/worker: "" (1)
  pvAccessModes:
  - ReadWriteOnce
  rotation: 3
  size: 1Gi
  tolerations:
  - operator: Exists (2)
roles:
- worker
- master
scanTolerations:
- operator: Exists
schedule: 0 1 * * *
(1) The Compliance Operator uses this node to store scan results in ARF format.
(2) The result server pod tolerates all taints.
To create the
ScanSetting CR, run the following command:
$ oc create -f rs-workers.yaml
To verify that the
ScanSetting object is created, run the following command:
$ oc get scansettings rs-on-workers -n openshift-compliance -o yaml
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  creationTimestamp: "2021-11-19T19:36:36Z"
  generation: 1
  name: rs-on-workers
  namespace: openshift-compliance
  resourceVersion: "48305"
  uid: 43fdfc5f-15a7-445a-8bbc-0e4a160cd46e
rawResultStorage:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  pvAccessModes:
  - ReadWriteOnce
  rotation: 3
  size: 1Gi
  tolerations:
  - operator: Exists
roles:
- worker
- master
scanTolerations:
- operator: Exists
schedule: 0 1 * * *
strictNodeScan: true
The ScanSetting custom resource also allows you to override the default CPU and memory limits of scanner pods through the scanLimits attribute. By default, the Compliance Operator uses 500Mi memory and 100m CPU for the scanner container, and 200Mi memory with 100m CPU for the api-resource-collector container. To set the memory limits of the Operator, modify the Subscription object if installed through OLM, or modify the Operator deployment itself.
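For example, a ScanSetting that raises the scanner memory limit might look like the following sketch. The scanLimits stanza is written here on the assumption that it accepts standard Kubernetes resource quantities; verify the exact schema with oc explain scansettings on your cluster. If the Operator is installed through OLM, the Operator's own limits normally go under spec.config.resources in the Subscription object.

apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: default
  namespace: openshift-compliance
scanLimits:
  memory: 1024Mi        # assumed override of the 500Mi scanner default
roles:
- worker
- master
scanTolerations:
- operator: Exists
schedule: 0 1 * * *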
To increase the default CPU and memory limits of the Compliance Operator, see Increasing Compliance Operator resource limits.
Increasing the memory limit for the Compliance Operator or the scanner pods is needed if the default limits are not sufficient and the Operator or scanner pods are terminated by the Out Of Memory (OOM) killer.
When the kubelet starts a container as part of a Pod, the kubelet passes that container’s requests and limits for memory and CPU to the container runtime. In Linux, the container runtime configures the kernel cgroups that apply and enforce the limits you defined.
The CPU limit defines how much CPU time the container can use. During each scheduling interval, the Linux kernel checks to see if this limit is exceeded. If so, the kernel waits before allowing the cgroup to resume execution.
If several different containers (cgroups) want to run on a contended system, workloads with larger CPU requests are allocated more CPU time than workloads with small requests. The memory request is used during Pod scheduling. On a node that uses cgroups v2, the container runtime might use the memory request as a hint to set memory.min and memory.low.
If a container attempts to allocate more memory than this limit, the Linux kernel out-of-memory subsystem activates and intervenes by stopping one of the processes in the container that tried to allocate memory. The memory limit for the Pod or container can also apply to pages in memory-backed volumes, such as an emptyDir.
The kubelet tracks tmpfs emptyDir volumes as container memory used, rather than as local ephemeral storage. If a container exceeds its memory request and the node that it runs on becomes short of memory overall, the Pod's container might be evicted.
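To illustrate a memory-backed volume that counts against the container's memory limit, the following sketch mounts a tmpfs emptyDir; the Pod and volume names are assumptions for the example:

apiVersion: v1
kind: Pod
metadata:
  name: memory-backed-emptydir-example
spec:
  containers:
  - name: app
    image: images.my-company.example/app:v4
    resources:
      limits:
        memory: "128Mi"        # pages written to the tmpfs volume count against this limit
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory           # memory-backed (tmpfs) emptyDir
      sizeLimit: 64Mi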
A container might not be allowed to exceed its CPU limit for extended periods of time. Container runtimes do not stop Pods or containers for excessive CPU usage. To determine whether a container cannot be scheduled or is being killed due to resource limits, see Troubleshooting the Compliance Operator.
When a Pod is created, the scheduler selects a node for the Pod to run on. Each node has a maximum capacity for each resource type in the amount of CPU and memory it can provide for Pods. The scheduler ensures that, for each resource type, the sum of the resource requests of the scheduled containers is less than the capacity of the node.
Although actual memory or CPU resource usage on nodes might be very low, the scheduler still refuses to place a Pod on a node if the capacity check fails. This protects against a resource shortage on a node when resource usage later increases.
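To see how the capacity check applies to a specific node, you can compare the node's allocatable resources with the requests already scheduled on it; replace <node-name> with a node from your cluster:

$ oc describe node <node-name>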
For each container, you can specify the following resource limits and requests:

spec.containers.resources.limits.cpu
spec.containers.resources.limits.memory
spec.containers.resources.limits.hugepages-<size>
spec.containers.resources.requests.cpu
spec.containers.resources.requests.memory
spec.containers.resources.requests.hugepages-<size>
Although you can specify requests and limits for only individual containers, it is also useful to consider the overall resource requests and limits for a pod. For a particular resource, a pod resource request or limit is the sum of the resource requests or limits of that type for each container in the pod.
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: app
    image: images.my-company.example/app:v4
    resources:
      requests: (1)
        memory: "64Mi"
        cpu: "250m"
      limits: (2)
        memory: "128Mi"
        cpu: "500m"
  - name: log-aggregator
    image: images.my-company.example/log-aggregator:v6
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
(1) The container requests 64Mi of memory and 250m of CPU.
(2) The container's limits are 128Mi of memory and 500m of CPU.