This documentation details the Red Hat responsibilities for the OpenShift Dedicated managed service.
A Red Hat Site Reliability Engineer (SRE) maintains a centralized monitoring and alerting system for all OpenShift Dedicated cluster components, SRE services, and underlying cloud provider accounts. Platform audit logs are securely forwarded to a centralized SIEM (Security Information and Event Management) system, where they might trigger configured alerts to the SRE team and are also subject to manual review. Audit logs are retained in the SIEM for one year. Audit logs for a given cluster are not deleted at the time the cluster is deleted.
An incident is an event that results in a degradation or outage of one or more Red Hat services. An incident can be raised by a customer or Customer Experience and Engagement (CEE) member through a support case, directly by the centralized monitoring and alerting system, or directly by a member of the SRE team.
Depending on the impact on the service and customer, the incident is categorized in terms of severity.
The general workflow of how a new incident is managed by Red Hat:
An SRE first responder is alerted to a new incident, and begins an initial investigation.
After the initial investigation, the incident is assigned an incident lead, who coordinates the recovery efforts.
The incident lead manages all communication and coordination around recovery, including any relevant notifications or support case updates.
The incident is recovered.
The incident is documented and a root cause analysis is performed within 5 business days of the incident.
A root cause analysis (RCA) draft document is shared with the customer within 7 business days of the incident.
Platform notifications are configured using email. Any customer notification is also sent to the corresponding Red Hat account team and, if applicable, the Red Hat Technical Account Manager.
The following activities can trigger notifications:
Platform incident
Performance degradation
Cluster capacity warnings
Critical vulnerabilities and resolution
Upgrade scheduling
All OpenShift Dedicated clusters are backed up using cloud provider snapshots. Notably, this does not include customer data stored on persistent volumes (PVs). All snapshots are taken using the appropriate cloud provider snapshot APIs and are uploaded to a secure object storage bucket (S3 in AWS, and GCS in Google Cloud) in the same account as the cluster.
Component | Snapshot frequency | Retention | Notes |
---|---|---|---|
Full object store backup | Daily | 7 days | This is a full backup of all Kubernetes objects, such as etcd. No PVs are backed up in this backup schedule. |
Full object store backup | Weekly | 30 days | This is a full backup of all Kubernetes objects, such as etcd. No PVs are backed up in this backup schedule. |
Full object store backup | Hourly | 24 hours | This is a full backup of all Kubernetes objects, such as etcd. No PVs are backed up in this backup schedule. |
Node root volume | Never | N/A | Nodes are considered to be short-term. Nothing critical should be stored on a node's root volume. |
Red Hat does not commit to any Recovery Point Objective (RPO) or Recovery Time Objective (RTO).
Customers are responsible for taking regular backups of their data.
Customers should deploy multi-AZ clusters with workloads that follow Kubernetes best practices to ensure high availability within a region.
If an entire cloud region is unavailable, customers must install a new cluster in a different region and restore their apps using their backup data.
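Because Red Hat's snapshots do not include PV data, customers typically arrange their own PV backups. One minimal sketch is an on-demand CSI volume snapshot, assuming the cluster has a CSI driver with snapshot support and an installed VolumeSnapshotClass; all names below are hypothetical placeholders:

```yaml
# Hypothetical example: an on-demand CSI snapshot of an application PVC.
# "csi-snapclass", "my-app", and "app-data" are placeholder names; the
# VolumeSnapshotClass and PVC names on your cluster will differ.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: app-data-snapshot
  namespace: my-app
spec:
  volumeSnapshotClassName: csi-snapclass
  source:
    persistentVolumeClaimName: app-data
```

Note that a snapshot stored in the same region does not protect against a full region outage; restoring applications into a cluster in a different region requires backup data that is copied out of the affected region.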
Evaluating and managing cluster capacity is a responsibility that is shared between Red Hat and the customer. Red Hat SRE is responsible for the capacity of all control plane and infrastructure nodes on the cluster.
Red Hat SRE also evaluates cluster capacity during upgrades and in response to cluster alerts. The impact of a cluster upgrade on capacity is evaluated as part of the upgrade testing process to ensure that capacity is not negatively impacted by new additions to the cluster. During a cluster upgrade, additional worker nodes are added to make sure that total cluster capacity is maintained during the upgrade process.
Capacity evaluations by SRE staff also happen in response to alerts from the cluster, once usage thresholds are exceeded for a certain period of time. Such alerts can also result in a notification to the customer.
This section describes the policies about how cluster and configuration changes, patches, and releases are managed.
You can initiate changes using self-service capabilities such as cluster deployment, worker node scaling, or cluster deletion.
Change history is captured in the Cluster History section in the OpenShift Cluster Manager Overview tab, and is available for you to view. The change history includes, but is not limited to, logs from the following changes:
Adding or removing identity providers
Adding or removing users to or from the dedicated-admins group
Scaling the cluster compute nodes
Scaling the cluster load balancer
Scaling the cluster persistent storage
Upgrading the cluster
You can implement a maintenance exclusion by avoiding the following changes in OpenShift Cluster Manager:
Deleting a cluster
Adding, modifying, or removing identity providers
Adding, modifying, or removing a user from an elevated group
Installing or removing add-ons
Modifying cluster networking configurations
Adding, modifying, or removing machine pools
Enabling or disabling user workload monitoring
Initiating an upgrade
To enforce the maintenance exclusion, ensure that machine pool autoscaling and automatic upgrade policies are disabled. After the maintenance exclusion has been lifted, re-enable machine pool autoscaling or automatic upgrade policies as desired.
Red Hat site reliability engineering (SRE) manages the infrastructure, code, and configuration of OpenShift Dedicated using a GitOps workflow and fully automated CI/CD pipelines. This process ensures that Red Hat can safely introduce service improvements on a continuous basis without negatively impacting customers.
Every proposed change undergoes a series of automated verifications immediately upon check-in. Changes are then deployed to a staging environment where they undergo automated integration testing. Finally, changes are deployed to the production environment. Each step is fully automated.
An authorized SRE reviewer must approve advancement to each step. The reviewer cannot be the same individual who proposed the change. All changes and approvals are fully auditable as part of the GitOps workflow.
Some changes are released to production incrementally, using feature flags to control availability of new features to specified clusters or customers.
OpenShift Container Platform software and the underlying immutable Red Hat Enterprise Linux CoreOS (RHCOS) operating system image are patched for bugs and vulnerabilities in regular z-stream upgrades. Read more about RHCOS architecture in the OpenShift Container Platform documentation.
Red Hat does not automatically upgrade your clusters. You can schedule to upgrade the clusters at regular intervals (recurring upgrade) or just once (individual upgrade) using the OpenShift Cluster Manager web console. Red Hat might forcefully upgrade a cluster to a new z-stream version only if the cluster is affected by a critical impact CVE. You can review the history of all cluster upgrade events in the OpenShift Cluster Manager web console. For more information about releases, see the Life Cycle policy.
Security and regulation compliance includes tasks, such as the implementation of security controls and compliance certification.
Red Hat defines and follows a data classification standard to determine the sensitivity of data and highlight inherent risk to the confidentiality and integrity of that data while it is collected, used, transmitted, stored, and processed. Customer-owned data is classified at the highest level of sensitivity and handling requirements.
OpenShift Dedicated uses cloud provider services such as AWS Key Management Service (KMS) and Google Cloud KMS to help securely manage encryption keys for persistent data. These keys are used for encrypting all control plane, infrastructure, and worker node root volumes. Customers can specify their own KMS key for encrypting root volumes at installation time. Persistent volumes (PVs) also use KMS for key management. Customers can specify their own KMS key for encrypting PVs by creating a new StorageClass that references the KMS key Amazon Resource Name (ARN) or ID.
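As a minimal sketch of such a StorageClass on a cluster running on AWS, assuming the AWS EBS CSI provisioner and using a placeholder key ARN and StorageClass name:

```yaml
# Hypothetical example: a StorageClass that encrypts new PVs with a
# customer-managed KMS key. The name and key ARN are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3-customer-kms
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  encrypted: "true"
  kmsKeyId: arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```

Any PersistentVolumeClaim that names this StorageClass would then have its backing volume encrypted with the referenced customer-managed key rather than the default key.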
When a customer deletes their OpenShift Dedicated cluster, all cluster data is permanently deleted, including control plane data volumes and customer application data volumes, such as persistent volumes (PVs).
Red Hat performs periodic vulnerability scanning of OpenShift Dedicated using industry standard tools. Identified vulnerabilities are tracked to their remediation according to timelines based on severity. Vulnerability scanning and remediation activities are documented for verification by third-party assessors in the course of compliance certification audits.
Each OpenShift Dedicated cluster is protected by a secure network configuration at the cloud infrastructure level using firewall rules (AWS Security Groups or Google Cloud Compute Engine firewall rules). OpenShift Dedicated customers on AWS are also protected against DDoS attacks with AWS Shield Standard.
Customers can optionally configure their OpenShift Dedicated cluster endpoints (web console, API, and application router) to be made private so that the cluster control plane or applications are not accessible from the Internet.
For AWS, customers can configure a private network connection to their OpenShift Dedicated cluster through AWS VPC peering, AWS VPN, or AWS Direct Connect.
At this time, private clusters are not supported for OpenShift Dedicated clusters on Google Cloud.
Red Hat performs periodic penetration tests against OpenShift Dedicated. Tests are performed by an independent internal team using industry standard tools and best practices.
Any issues that are discovered are prioritized based on severity. Any issues found belonging to open source projects are shared with the community for resolution.
OpenShift Dedicated follows common industry best practices for security and controls. The certifications are outlined in the following table.
Compliance | OpenShift Dedicated on AWS | OpenShift Dedicated on GCP |
---|---|---|
HIPAA Qualified | Yes (Customer Cloud Subscriptions only) | Yes (Customer Cloud Subscriptions only) |
ISO 27001 | Yes | Yes |
PCI DSS | Yes | Yes |
SOC 2 Type 2 | Yes | Yes |
See Red Hat Subprocessor List for information on SRE residency.
OpenShift Dedicated provides disaster recovery for failures that occur at the pod, worker node, infrastructure node, control plane node, and availability zone levels.
All disaster recovery requires that the customer use best practices for deploying highly available applications, storage, and cluster architecture (for example, single-zone deployment vs. multi-zone deployment) to account for the level of desired availability.
One single-zone cluster will not provide disaster avoidance or recovery in the event of an availability zone or region outage. Multiple single-zone clusters with customer-maintained failover can account for outages at the zone or region levels.
One multi-zone cluster will not provide disaster avoidance or recovery in the event of a full region outage. Multiple multi-zone clusters with customer-maintained failover can account for outages at the region level.
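One of the best practices referenced above is spreading application replicas across availability zones so that a single-zone outage leaves replicas running elsewhere. A minimal sketch using the standard Kubernetes topology spread mechanism, with placeholder names and image:

```yaml
# Hypothetical example: spread a Deployment's replicas across
# availability zones in a multi-zone cluster. "my-app" and the
# image reference are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels:
            app: my-app
      containers:
      - name: my-app
        image: registry.example.com/my-app:latest
```

With `maxSkew: 1`, the scheduler keeps the replica count per zone within one of each other, so losing one zone removes at most roughly a third of the replicas in this three-replica example.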
For more information about Red Hat site reliability engineering (SRE) teams access, see Identity and access management.