You can upgrade to the latest version of Red Hat Advanced Cluster Security for Kubernetes (RHACS) from a supported older version.
To upgrade RHACS to the latest version, perform the following steps:
You can back up the Central database and use that backup to roll back from a failed upgrade or to restore data after an infrastructure disaster.
You must have an API token with read permission for all resources of Red Hat Advanced Cluster Security for Kubernetes. The Analyst system role has read permissions for all resources.
You have installed the roxctl CLI.
You have configured the ROX_API_TOKEN and the ROX_CENTRAL_ADDRESS environment variables.
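For example, you can set both variables in your shell before running roxctl. The values shown here are placeholders; substitute your own API token and Central endpoint:
$ export ROX_API_TOKEN=<api_token>
$ export ROX_CENTRAL_ADDRESS=<address>:<port_number>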
Run the backup command:
$ roxctl -e "$ROX_CENTRAL_ADDRESS" central backup
To upgrade the roxctl CLI to the latest version, you must uninstall the existing version of the roxctl CLI and then install the latest version of the roxctl CLI.
You can uninstall the roxctl CLI binary on Linux by using the following procedure.
Find and delete the roxctl binary:
$ ROXPATH=$(which roxctl) && rm -f $ROXPATH (1)
(1) Depending on your environment, you might need administrator rights to delete the roxctl binary.
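For example, if the binary is installed in a system directory such as /usr/local/bin, you might need to run the deletion with elevated privileges:
$ sudo rm -f "$(which roxctl)"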
You can install the roxctl CLI binary on Linux by using the following procedure.
Determine the roxctl architecture for the target operating system:
$ arch="$(uname -m | sed "s/x86_64//")"; arch="${arch:+-$arch}"
Download the roxctl CLI:
$ curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.6.1/bin/Linux/roxctl${arch}"
Make the roxctl binary executable:
$ chmod +x roxctl
Place the roxctl binary in a directory that is on your PATH:
To check your PATH, execute the following command:
$ echo $PATH
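For example, assuming /usr/local/bin appears in your PATH output, you can place the binary there with the standard install utility:
$ sudo install -m 0755 roxctl /usr/local/bin/roxctl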
Verify the roxctl version you have installed:
$ roxctl version
You can install the roxctl CLI binary on macOS by using the following procedure.
Determine the roxctl architecture for the target operating system:
$ arch="$(uname -m | sed "s/x86_64//")"; arch="${arch:+-$arch}"
Download the roxctl CLI:
$ curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.6.1/bin/Darwin/roxctl${arch}"
Remove all extended attributes from the binary:
$ xattr -c roxctl
Make the roxctl binary executable:
$ chmod +x roxctl
Place the roxctl binary in a directory that is on your PATH:
To check your PATH, execute the following command:
$ echo $PATH
Verify the roxctl version you have installed:
$ roxctl version
You can install the roxctl CLI binary on Windows by using the following procedure.
Download the roxctl CLI:
$ curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.6.1/bin/Windows/roxctl.exe
Verify the roxctl version you have installed:
$ roxctl version
After you have created a backup of the Central database and generated the necessary resources by using the provisioning bundle, the next step is to upgrade the Central cluster. This process involves upgrading Central and Scanner.
You can update Central to the latest version by downloading and deploying the updated images.
Run the following command to update the Central image:
$ oc -n stackrox set image deploy/central central=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.6.1 (1)
(1) If you use Kubernetes, enter kubectl instead of oc.
Verify that the new pods have deployed:
$ oc get deploy -n stackrox -o wide
$ oc get pod -n stackrox --watch
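As an optional additional check, you can wait for the Central rollout to complete; if you use Kubernetes, enter kubectl instead of oc:
$ oc -n stackrox rollout status deploy/central --timeout=5m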
Upgrading to version 4.4 requires that you manually replace the GOMEMLIMIT environment variable with the ROX_MEMLIMIT environment variable. You must edit this variable for each deployment.
Run the following command to edit the variable for the Central deployment:
$ oc -n stackrox edit deploy/central (1)
(1) If you use Kubernetes, enter kubectl instead of oc.
Replace the GOMEMLIMIT variable with ROX_MEMLIMIT.
Save the file.
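For reference, after the edit, the container environment entry looks similar to the following sketch. The value shown is illustrative; keep the value from your existing setting:
env:
- name: ROX_MEMLIMIT # renamed from GOMEMLIMIT
  value: "4294967296" # illustrative value; keep your existing value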
You can update Scanner to the latest version by downloading and deploying the updated images.
Run the following command to update the Scanner image:
$ oc -n stackrox set image deploy/scanner scanner=registry.redhat.io/advanced-cluster-security/rhacs-scanner-rhel8:4.6.1 (1)
(1) If you use Kubernetes, enter kubectl instead of oc.
Verify that the new pods have deployed:
$ oc get deploy -n stackrox -o wide
$ oc get pod -n stackrox --watch
Upgrading to version 4.4 requires that you manually replace the GOMEMLIMIT environment variable with the ROX_MEMLIMIT environment variable. You must edit this variable for each deployment.
Run the following command to edit the variable for the Scanner deployment:
$ oc -n stackrox edit deploy/scanner (1)
(1) If you use Kubernetes, enter kubectl instead of oc.
Replace the GOMEMLIMIT variable with ROX_MEMLIMIT.
Save the file.
After you have upgraded both Central and Scanner, verify that the Central cluster upgrade is complete.
Check the Central logs by running the following command:
$ oc logs -n stackrox deploy/central -c central (1)
(1) If you use Kubernetes, enter kubectl instead of oc.
No database restore directory found (this is not an error).
Migrator: 2023/04/19 17:58:54: starting DB compaction
Migrator: 2023/04/19 17:58:54: Free fraction of 0.0391 (40960/1048576) is < 0.7500. Will not compact
badger 2023/04/19 17:58:54 INFO: All 1 tables opened in 2ms
badger 2023/04/19 17:58:55 INFO: Replaying file id: 0 at offset: 846357
badger 2023/04/19 17:58:55 INFO: Replay took: 50.324µs
badger 2023/04/19 17:58:55 DEBUG: Value log discard stats empty
Migrator: 2023/04/19 17:58:55: DB is up to date. Nothing to do here.
badger 2023/04/19 17:58:55 INFO: Got compaction priority: {level:0 score:1.73 dropPrefix:[]}
version: 2023/04/19 17:58:55.189866 ensure.go:49: Info: Version found in the DB was current. We’re good to go!
After upgrading Central services, you must upgrade all secured clusters.
To complete manual upgrades of each secured cluster running Sensor, Collector, and Admission controller, follow the instructions in this section.
You must update the sensor, collector, and compliance images on each secured cluster when not using automatic upgrades.
If you are using Kubernetes, use kubectl instead of oc.
Update the Sensor image:
$ oc -n stackrox set image deploy/sensor sensor=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.6.1 (1)
(1) If you use Kubernetes, enter kubectl instead of oc.
Update the Compliance image:
$ oc -n stackrox set image ds/collector compliance=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.6.1 (1)
(1) If you use Kubernetes, enter kubectl instead of oc.
Update the Collector image:
$ oc -n stackrox set image ds/collector collector=registry.redhat.io/advanced-cluster-security/rhacs-collector-rhel8:4.6.1 (1)
(1) If you use Kubernetes, enter kubectl instead of oc.
If you are using the collector slim image, run the following command instead:
$ oc -n stackrox set image ds/collector collector=registry.redhat.io/advanced-cluster-security/rhacs-collector-slim-rhel8:4.6.1
Update the admission control image:
$ oc -n stackrox set image deploy/admission-control admission-control=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.6.1
If you have installed RHACS on Red Hat OpenShift by using the roxctl CLI, you must migrate the security context constraints (SCCs) during the manual upgrade. For more information, see "Migrating SCCs during the manual upgrade" in the "Additional resources" section.
When upgrading to version 4.6 or later from a version earlier than 4.6, you must patch the sensor and admission-control deployments to set the POD_NAMESPACE environment variable.
If you are using Kubernetes, use kubectl instead of oc.
Patch sensor to ensure POD_NAMESPACE is set by running the following command:
$ [[ -z "$(oc -n stackrox get deployment sensor -o yaml | grep POD_NAMESPACE)" ]] && oc -n stackrox patch deployment sensor --type=json -p '[{"op":"add","path":"/spec/template/spec/containers/0/env/-","value":{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}}]'
Patch admission-control to ensure POD_NAMESPACE is set by running the following command:
$ [[ -z "$(oc -n stackrox get deployment admission-control -o yaml | grep POD_NAMESPACE)" ]] && oc -n stackrox patch deployment admission-control --type=json -p '[{"op":"add","path":"/spec/template/spec/containers/0/env/-","value":{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}}]'
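You can optionally confirm that the variable is now set; this jsonpath query (shown for sensor; repeat for admission-control) is a suggested check, not part of the official procedure, and should print metadata.namespace:
$ oc -n stackrox get deploy/sensor -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="POD_NAMESPACE")].valueFrom.fieldRef.fieldPath}'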
By migrating the security context constraints (SCCs) during the manual upgrade by using the roxctl CLI, you can seamlessly transition the Red Hat Advanced Cluster Security for Kubernetes (RHACS) services to use the Red Hat OpenShift SCCs, ensuring compatibility and optimal security configurations across Central and all secured clusters.
List all of the RHACS services that are deployed on Central and all secured clusters:
$ oc -n stackrox describe pods | grep 'openshift.io/scc\|^Name:'
Name: admission-control-6f4dcc6b4c-2phwd
openshift.io/scc: stackrox-admission-control
#...
Name: central-575487bfcb-sjdx8
openshift.io/scc: stackrox-central
Name: central-db-7c7885bb-6bgbd
openshift.io/scc: stackrox-central-db
Name: collector-56nkr
openshift.io/scc: stackrox-collector
#...
Name: scanner-68fc55b599-f2wm6
openshift.io/scc: stackrox-scanner
Name: scanner-68fc55b599-fztlh
#...
Name: sensor-84545f86b7-xgdwf
openshift.io/scc: stackrox-sensor
#...
In this example, you can see that each pod has its own custom SCC, which is specified through the openshift.io/scc field.
Add the required roles and role bindings to use the Red Hat OpenShift SCCs instead of the RHACS custom SCCs.
To add the required roles and role bindings to use the Red Hat OpenShift SCCs for the Central cluster, complete the following steps:
Create a file named update-central.yaml that defines the role and role binding resources by using the following content:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role (1)
metadata:
annotations:
email: support@stackrox.com
owner: stackrox
labels:
app.kubernetes.io/component: central
app.kubernetes.io/instance: stackrox-central-services
app.kubernetes.io/name: stackrox
app.kubernetes.io/part-of: stackrox-central-services
app.kubernetes.io/version: 4.4.0
name: use-central-db-scc (2)
namespace: stackrox (3)
rules: (4)
- apiGroups:
- security.openshift.io
resourceNames:
- nonroot-v2
resources:
- securitycontextconstraints
verbs:
- use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
annotations:
email: support@stackrox.com
owner: stackrox
labels:
app.kubernetes.io/component: central
app.kubernetes.io/instance: stackrox-central-services
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: stackrox
app.kubernetes.io/part-of: stackrox-central-services
app.kubernetes.io/version: 4.4.0
name: use-central-scc
namespace: stackrox
rules:
- apiGroups:
- security.openshift.io
resourceNames:
- nonroot-v2
resources:
- securitycontextconstraints
verbs:
- use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
annotations:
email: support@stackrox.com
owner: stackrox
labels:
app.kubernetes.io/component: scanner
app.kubernetes.io/instance: stackrox-central-services
app.kubernetes.io/name: stackrox
app.kubernetes.io/part-of: stackrox-central-services
app.kubernetes.io/version: 4.4.0
name: use-scanner-scc
namespace: stackrox
rules:
- apiGroups:
- security.openshift.io
resourceNames:
- nonroot-v2
resources:
- securitycontextconstraints
verbs:
- use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding (5)
metadata:
annotations:
email: support@stackrox.com
owner: stackrox
labels:
app.kubernetes.io/component: central
app.kubernetes.io/instance: stackrox-central-services
app.kubernetes.io/name: stackrox
app.kubernetes.io/part-of: stackrox-central-services
app.kubernetes.io/version: 4.4.0
name: central-db-use-scc (6)
namespace: stackrox
roleRef: (7)
apiGroup: rbac.authorization.k8s.io
kind: Role
name: use-central-db-scc
subjects: (8)
- kind: ServiceAccount
name: central-db
namespace: stackrox
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
annotations:
email: support@stackrox.com
owner: stackrox
labels:
app.kubernetes.io/component: central
app.kubernetes.io/instance: stackrox-central-services
app.kubernetes.io/name: stackrox
app.kubernetes.io/part-of: stackrox-central-services
app.kubernetes.io/version: 4.4.0
name: central-use-scc
namespace: stackrox
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: use-central-scc
subjects:
- kind: ServiceAccount
name: central
namespace: stackrox
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
annotations:
email: support@stackrox.com
owner: stackrox
labels:
app.kubernetes.io/component: scanner
app.kubernetes.io/instance: stackrox-central-services
app.kubernetes.io/name: stackrox
app.kubernetes.io/part-of: stackrox-central-services
app.kubernetes.io/version: 4.4.0
name: scanner-use-scc
namespace: stackrox
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: use-scanner-scc
subjects:
- kind: ServiceAccount
name: scanner
namespace: stackrox
(1) The type of Kubernetes resource, in this example, Role.
(2) The name of the role resource.
(3) The namespace in which the role is created.
(4) Describes the permissions granted by the role resource.
(5) The type of Kubernetes resource, in this example, RoleBinding.
(6) The name of the role binding resource.
(7) Specifies the role to bind in the same namespace.
(8) Specifies the subjects that are bound to the role.
Create the role and role binding resources specified in the update-central.yaml file by running the following command:
$ oc -n stackrox create -f ./update-central.yaml
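Optionally, verify that the roles and role bindings were created; this check is a suggestion and simply filters the created resources by name:
$ oc -n stackrox get roles,rolebindings -o name | grep scc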
To add the required roles and role bindings to use the Red Hat OpenShift SCCs for all secured clusters, complete the following steps:
Create a file named upgrade-scs.yaml that defines the role and role binding resources by using the following content:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role (1)
metadata:
annotations:
email: support@stackrox.com
owner: stackrox
labels:
app.kubernetes.io/component: collector
app.kubernetes.io/instance: stackrox-secured-cluster-services
app.kubernetes.io/name: stackrox
app.kubernetes.io/part-of: stackrox-secured-cluster-services
app.kubernetes.io/version: 4.4.0
auto-upgrade.stackrox.io/component: sensor
name: use-privileged-scc (2)
namespace: stackrox (3)
rules: (4)
- apiGroups:
- security.openshift.io
resourceNames:
- privileged
resources:
- securitycontextconstraints
verbs:
- use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding (5)
metadata:
annotations:
email: support@stackrox.com
owner: stackrox
labels:
app.kubernetes.io/component: collector
app.kubernetes.io/instance: stackrox-secured-cluster-services
app.kubernetes.io/name: stackrox
app.kubernetes.io/part-of: stackrox-secured-cluster-services
app.kubernetes.io/version: 4.4.0
auto-upgrade.stackrox.io/component: sensor
name: collector-use-scc (6)
namespace: stackrox
roleRef: (7)
apiGroup: rbac.authorization.k8s.io
kind: Role
name: use-privileged-scc
subjects: (8)
- kind: ServiceAccount
name: collector
namespace: stackrox
(1) The type of Kubernetes resource, in this example, Role.
(2) The name of the role resource.
(3) The namespace in which the role is created.
(4) Describes the permissions granted by the role resource.
(5) The type of Kubernetes resource, in this example, RoleBinding.
(6) The name of the role binding resource.
(7) Specifies the role to bind in the same namespace.
(8) Specifies the subjects that are bound to the role.
Create the role and role binding resources specified in the upgrade-scs.yaml file by running the following command:
$ oc -n stackrox create -f ./upgrade-scs.yaml
You must run this command on each secured cluster to create the role and role bindings specified in the upgrade-scs.yaml file.
Delete the SCCs that are specific to RHACS:
To delete the SCCs that are specific to the Central cluster, run the following command:
$ oc delete scc/stackrox-central scc/stackrox-central-db scc/stackrox-scanner
To delete the SCCs that are specific to all secured clusters, run the following command:
$ oc delete scc/stackrox-admission-control scc/stackrox-collector scc/stackrox-sensor
You must run this command on each secured cluster to delete the SCCs that are specific to each secured cluster.
Ensure that all the pods are using the correct SCCs by running the following command:
$ oc -n stackrox describe pods | grep 'openshift.io/scc\|^Name:'
Compare the output with the following table:
Component | Previous custom SCC | New Red Hat OpenShift 4 SCC
---|---|---
Central | stackrox-central | nonroot-v2
Central-db | stackrox-central-db | nonroot-v2
Scanner | stackrox-scanner | nonroot-v2
Scanner-db | stackrox-scanner | nonroot-v2
Admission Controller | stackrox-admission-control | restricted-v2
Collector | stackrox-collector | privileged
Sensor | stackrox-sensor | restricted-v2
Upgrading to version 4.4 requires that you manually replace the GOMEMLIMIT environment variable with the ROX_MEMLIMIT environment variable. You must edit this variable for each deployment.
Run the following command to edit the variable for the Sensor deployment:
$ oc -n stackrox edit deploy/sensor (1)
(1) If you use Kubernetes, enter kubectl instead of oc.
Replace the GOMEMLIMIT variable with ROX_MEMLIMIT.
Save the file.
Upgrading to version 4.4 requires that you manually replace the GOMEMLIMIT environment variable with the ROX_MEMLIMIT environment variable. You must edit this variable for each deployment.
Run the following command to edit the variable for the Collector deployment:
$ oc -n stackrox edit deploy/collector (1)
(1) If you use Kubernetes, enter kubectl instead of oc.
Replace the GOMEMLIMIT variable with ROX_MEMLIMIT.
Save the file.
Upgrading to version 4.4 requires that you manually replace the GOMEMLIMIT environment variable with the ROX_MEMLIMIT environment variable. You must edit this variable for each deployment.
Run the following command to edit the variable for the Admission Controller deployment:
$ oc -n stackrox edit deploy/admission-control (1)
(1) If you use Kubernetes, enter kubectl instead of oc.
Replace the GOMEMLIMIT variable with ROX_MEMLIMIT.
Save the file.
After you have upgraded secured clusters, verify that the updated pods are working.
Check that the new pods have deployed:
$ oc get deploy,ds -n stackrox -o wide (1)
(1) If you use Kubernetes, enter kubectl instead of oc.
$ oc get pod -n stackrox --watch (1)
(1) If you use Kubernetes, enter kubectl instead of oc.
If you use OpenShift Container Platform, you can enable scanning of Red Hat Enterprise Linux CoreOS (RHCOS) nodes for vulnerabilities by using Red Hat Advanced Cluster Security for Kubernetes (RHACS).
To scan RHCOS node hosts of the secured cluster, you must have installed Secured Cluster services on OpenShift Container Platform 4.12 or later. For information about supported platforms and architectures, see the Red Hat Advanced Cluster Security for Kubernetes Support Matrix. For life cycle support information for RHACS, see the Red Hat Advanced Cluster Security for Kubernetes Support Policy.
Run one of the following commands to update the compliance container.
For a default compliance container with metrics disabled, run the following command:
$ oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"name":"compliance","env":[{"name":"ROX_METRICS_PORT","value":"disabled"},{"name":"ROX_NODE_SCANNING_ENDPOINT","value":"127.0.0.1:8444"},{"name":"ROX_NODE_SCANNING_INTERVAL","value":"4h"},{"name":"ROX_NODE_SCANNING_INTERVAL_DEVIATION","value":"24m"},{"name":"ROX_NODE_SCANNING_MAX_INITIAL_WAIT","value":"5m"},{"name":"ROX_RHCOS_NODE_SCANNING","value":"true"},{"name":"ROX_CALL_NODE_INVENTORY_ENABLED","value":"true"}]}]}}}}'
For a compliance container with Prometheus metrics enabled, run the following command:
$ oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"name":"compliance","env":[{"name":"ROX_METRICS_PORT","value":":9091"},{"name":"ROX_NODE_SCANNING_ENDPOINT","value":"127.0.0.1:8444"},{"name":"ROX_NODE_SCANNING_INTERVAL","value":"4h"},{"name":"ROX_NODE_SCANNING_INTERVAL_DEVIATION","value":"24m"},{"name":"ROX_NODE_SCANNING_MAX_INITIAL_WAIT","value":"5m"},{"name":"ROX_RHCOS_NODE_SCANNING","value":"true"},{"name":"ROX_CALL_NODE_INVENTORY_ENABLED","value":"true"}]}]}}}}'
Update the Collector DaemonSet (DS) by taking the following steps:
Add new volume mounts to Collector DS by running the following command:
$ oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"volumes":[{"name":"tmp-volume","emptyDir":{}},{"name":"cache-volume","emptyDir":{"sizeLimit":"200Mi"}}]}}}}'
Add the new NodeScanner container by running the following command:
$ oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"command":["/scanner","--nodeinventory","--config=",""],"env":[{"name":"ROX_NODE_NAME","valueFrom":{"fieldRef":{"apiVersion":"v1","fieldPath":"spec.nodeName"}}},{"name":"ROX_CLAIR_V4_SCANNING","value":"true"},{"name":"ROX_COMPLIANCE_OPERATOR_INTEGRATION","value":"true"},{"name":"ROX_CSV_EXPORT","value":"false"},{"name":"ROX_DECLARATIVE_CONFIGURATION","value":"false"},{"name":"ROX_INTEGRATIONS_AS_CONFIG","value":"false"},{"name":"ROX_NETPOL_FIELDS","value":"true"},{"name":"ROX_NETWORK_DETECTION_BASELINE_SIMULATION","value":"true"},{"name":"ROX_NETWORK_GRAPH_PATTERNFLY","value":"true"},{"name":"ROX_NODE_SCANNING_CACHE_TIME","value":"3h36m"},{"name":"ROX_NODE_SCANNING_INITIAL_BACKOFF","value":"30s"},{"name":"ROX_NODE_SCANNING_MAX_BACKOFF","value":"5m"},{"name":"ROX_PROCESSES_LISTENING_ON_PORT","value":"false"},{"name":"ROX_QUAY_ROBOT_ACCOUNTS","value":"true"},{"name":"ROX_ROXCTL_NETPOL_GENERATE","value":"true"},{"name":"ROX_SOURCED_AUTOGENERATED_INTEGRATIONS","value":"false"},{"name":"ROX_SYSLOG_EXTRA_FIELDS","value":"true"},{"name":"ROX_SYSTEM_HEALTH_PF","value":"false"},{"name":"ROX_VULN_MGMT_WORKLOAD_CVES","value":"false"}],"image":"registry.redhat.io/advanced-cluster-security/rhacs-scanner-slim-rhel8:4.6.1","imagePullPolicy":"IfNotPresent","name":"node-inventory","ports":[{"containerPort":8444,"name":"grpc","protocol":"TCP"}],"volumeMounts":[{"mountPath":"/host","name":"host-root-ro","readOnly":true},{"mountPath":"/tmp/","name":"tmp-volume"},{"mountPath":"/cache","name":"cache-volume"}]}]}}}}'
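You can optionally confirm that the patch took effect by listing the container names in the DaemonSet; the expected output includes collector, compliance, and node-inventory:
$ oc -n stackrox get ds/collector -o jsonpath='{.spec.template.spec.containers[*].name}'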
You can roll back to a previous version of Central if the upgrade to a new version is unsuccessful.
You can roll back to a previous version of Central if upgrading Red Hat Advanced Cluster Security for Kubernetes fails.
Before you can perform a rollback, you must have free disk space available on your persistent storage. Red Hat Advanced Cluster Security for Kubernetes uses disk space to keep a copy of databases during the upgrade. If the disk space is not enough to store a copy and the upgrade fails, you might not be able to roll back to an earlier version.
Run the following command to roll back to a previous version when an upgrade fails (before the Central service starts):
$ oc -n stackrox rollout undo deploy/central (1)
(1) If you use Kubernetes, enter kubectl instead of oc.
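If you want to confirm which revisions are available before undoing the rollout, you can optionally inspect the rollout history; if you use Kubernetes, enter kubectl instead of oc:
$ oc -n stackrox rollout history deploy/central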
You can use forced rollback to roll back to an earlier version of Central (after the Central service starts).
Using forced rollback to switch back to a previous version might result in loss of data and functionality.
Before you can perform a rollback, you must have free disk space available on your persistent storage. Red Hat Advanced Cluster Security for Kubernetes uses disk space to keep a copy of databases during the upgrade. If the disk space is not enough to store a copy and the upgrade fails, you will not be able to roll back to an earlier version.
Run the following commands to perform a forced rollback:
To forcibly roll back to the previously installed version:
$ oc -n stackrox rollout undo deploy/central (1)
(1) If you use Kubernetes, enter kubectl instead of oc.
To forcibly roll back to a specific version:
Edit Central’s ConfigMap:
$ oc -n stackrox edit configmap/central-config (1)
(1) If you use Kubernetes, enter kubectl instead of oc.
Update the value of the maintenance.forceRollbackVersion key:
data:
central-config.yaml: |
maintenance:
safeMode: false
compaction:
enabled: true
bucketFillFraction: .5
freeFractionThreshold: 0.75
forceRollbackVersion: <x.x.x.x> (1)
...
(1) Specify the version that you want to roll back to.
Update the Central image version:
$ oc -n stackrox \ (1)
set image deploy/central central=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:<x.x.x.x> (2)
(1) If you use Kubernetes, enter kubectl instead of oc.
(2) Specify the version that you want to roll back to. It must be the same version that you specified for the maintenance.forceRollbackVersion key in the central-config config map.
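Optionally, confirm that the deployment now references the rollback image; this jsonpath query is a suggested check, not part of the official procedure:
$ oc -n stackrox get deploy/central -o jsonpath='{.spec.template.spec.containers[0].image}'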
The updated Sensors and Collectors continue to report the latest data from each secured cluster.
The last time Sensor contacted Central is visible in the RHACS portal.
In the RHACS portal, go to Platform Configuration → System Health.
Check to ensure that Sensor Upgrade shows clusters up to date with Central.
For security reasons, Red Hat recommends that you revoke the API token that you used to complete the Central database backup.
After the upgrade, you must reload the RHACS portal page and re-accept the certificate to continue using the RHACS portal.
In the RHACS portal, go to Platform Configuration → Integrations.
Scroll down to the Authentication Tokens category, and click API Token.
Select the checkbox in front of the token name that you want to revoke.
Click Revoke.
On the confirmation dialog box, click Confirm.
If you encounter problems when using the legacy installation method for the secured cluster with automated updates enabled, you can troubleshoot by using the following information. The following errors can appear in the clusters view when the upgrader fails.
The following error is displayed in the cluster page:
"Upgrader failed to execute PreflightStage of the roll-forward workflow: executing stage "Run preflight checks": preflight check "Kubernetes authorization" reported errors. This usually means that access is denied. Have you configured this Secured Cluster for automatically receiving upgrades?"
Ensure that the bundle for the secured cluster was generated with future upgrades enabled before clicking Download YAML file and keys.
If possible, remove that secured cluster and generate a new bundle, making sure that future upgrades are enabled.
If you cannot re-create the cluster, you can take these actions:
Ensure that the service account sensor-upgrader exists in the same namespace as Sensor.
Ensure that a ClusterRoleBinding exists (default name: <namespace>:upgrade-sensors) that grants the cluster-admin ClusterRole to the sensor-upgrader service account.
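If either object is missing, the following is a minimal sketch for re-creating them, assuming Sensor runs in the stackrox namespace; adjust the namespace to match your installation:
$ oc -n stackrox create serviceaccount sensor-upgrader
$ oc create clusterrolebinding stackrox:upgrade-sensors --clusterrole=cluster-admin --serviceaccount=stackrox:sensor-upgrader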
The following error is displayed in the cluster page:
"Upgrade initialization error: The upgrader pods have trouble pulling the new image: Error pulling image: (...) (<image_reference:tag>: not found)"
Ensure that the secured cluster can access the registry and pull the image <image_reference:tag>.
Ensure that the image pull secrets are configured correctly in the secured cluster.
The following error is displayed in the cluster page:
"Upgrade initialization error: Pod terminated: (Error)"
Ensure that the upgrader has enough permissions for accessing the cluster objects. For more information, see "Upgrader is missing permissions".
Check the upgrader logs for more insights.
The logs can be accessed by running the following command:
$ kubectl -n <namespace> logs deploy/sensor-upgrader (1)
(1) For <namespace>, specify the namespace in which Sensor is running.
Usually, the upgrader deployment runs in the cluster only for a short time while performing the upgrade. It is removed afterward, so accessing its logs by using the orchestrator CLI can require proper timing.