OpenShift Dedicated can provision persistent volumes (PVs) using the Container Storage Interface (CSI) driver for Google Cloud Platform (GCP) persistent disk (PD) storage.
Familiarity with persistent storage and configuring CSI volumes is recommended when working with a Container Storage Interface (CSI) Operator and driver.
To create CSI-provisioned persistent volumes (PVs) that mount to GCP PD storage assets, OpenShift Dedicated installs the GCP PD CSI Driver Operator and the GCP PD CSI driver by default in the openshift-cluster-csi-drivers namespace.
GCP PD CSI Driver Operator: By default, the Operator provides a storage class that you can use to create PVCs. You can disable this default storage class if desired (see Managing the default storage class). You also have the option to create the GCP PD storage class as described in Persistent storage using GCE Persistent Disk.
GCP PD driver: The driver enables you to create and mount GCP PD PVs.
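To check that the Operator and driver components are running, you can list the resources in that namespace. This is a general health check rather than a step from the procedure, and the exact pod names vary by release:
$ oc get pods -n openshift-cluster-csi-drivers
$ oc get csidriver pd.csi.storage.gke.io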
The GCP PD CSI driver supports the C3 instance type for bare metal and the N4 machine series. The C3 instance type and N4 machine series support hyperdisk-balanced disks.
The GCP PD CSI driver support for the C3 instance type for bare metal and the N4 machine series has the following limitations:
Cloning volumes is not supported when using storage pools.
For cloning or resizing, hyperdisk-balanced disks original volume size must be 6Gi or greater.
The default storage class is standard-csi.
You need to manually create a storage class. For information about creating the storage class, see Step 2 in Section Setting up hyperdisk-balanced disks.
Clusters with mixed virtual machines (VMs) that use different storage types, for example, N2 and N4, are not supported. This is due to hyperdisk-balanced disks not being usable on most legacy VMs. Similarly, regular persistent disks are not usable on N4/C3 VMs.
A GCP cluster with c3-standard-2, c3-standard-4, n4-standard-2, and n4-standard-4 nodes can erroneously exceed the maximum attachable disk number, which should be 16 (JIRA link).
Hyperdisk storage pools can be used with Compute Engine for large-scale storage. A hyperdisk storage pool is a purchased collection of capacity, throughput, and IOPS, which you can then provision for your applications as needed. You can use hyperdisk storage pools to create and manage disks in pools and use the disks across multiple workloads. By managing disks in aggregate, you can save costs while achieving expected capacity and performance growth. By using only the storage that you need in hyperdisk storage pools, you reduce the complexity of forecasting capacity and reduce management by going from managing hundreds of disks to managing a single storage pool.
To set up storage pools, see Setting up hyperdisk-balanced disks.
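For illustration only, a storage-pool-backed storage class such as the hyperdisk-sc class used in the following procedure might look like the sketch below. The project ID, zone, pool name, and performance values are placeholders, and the parameter names follow the GCP PD CSI driver's hyperdisk support; consult Setting up hyperdisk-balanced disks for the authoritative definition:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hyperdisk-sc
provisioner: pd.csi.storage.gke.io
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  type: hyperdisk-balanced
  provisioned-throughput-on-create: "140Mi"   # placeholder throughput
  provisioned-iops-on-create: "3000"          # placeholder IOPS
  storage-pools: projects/<project-id>/zones/us-east4-c/storagePools/pool-us-east4-c   # placeholder pool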
Access to the cluster with administrative privileges
To set up hyperdisk-balanced disks:
Create a persistent volume claim (PVC) that uses the hyperdisk-specific storage class using the following example YAML file:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  storageClassName: hyperdisk-sc (1)
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2048Gi (2)
1 | The PVC references the storage pool-specific storage class. In this example, hyperdisk-sc. |
2 | The target storage capacity of the hyperdisk-balanced volume. In this example, 2048Gi. |
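After saving the manifest, for example as pvc.yaml (an assumed file name), you might create the claim with:
$ oc apply -f pvc.yaml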
Create a deployment that uses the PVC that you just created. Using a deployment helps ensure that your application has access to the persistent storage even after the pod restarts or is rescheduled:
Ensure a node pool with the specified machine series is up and running before creating the deployment. Otherwise, the pod fails to schedule.
Use the following example YAML file to create the deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      nodeSelector:
        cloud.google.com/machine-family: n4 (1)
      containers:
      - name: postgres
        image: postgres:14-alpine
        args: [ "sleep", "3600" ]
        volumeMounts:
        - name: sdk-volume
          mountPath: /usr/share/data/
      volumes:
      - name: sdk-volume
        persistentVolumeClaim:
          claimName: my-pvc (2)
1 | Specifies the machine family. In this example, it is n4. |
2 | Specifies the name of the PVC created in the preceding step. In this example, it is my-pvc. |
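Assuming the deployment manifest is saved as deployment.yaml (an assumed file name), you might apply it with:
$ oc apply -f deployment.yaml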
Confirm that the deployment was successfully created by running the following command:
$ oc get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
postgres 0/1 1 0 42s
It might take a few minutes for hyperdisk instances to complete provisioning and display a READY status.
Confirm that PVC my-pvc has been successfully bound to a persistent volume (PV) by running the following command:
$ oc get pvc my-pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
my-pvc Bound pvc-1ff52479-4c81-4481-aa1d-b21c8f8860c6 2Ti RWO hyperdisk-sc <unset> 2m24s
Confirm the expected configuration of your hyperdisk-balanced disk:
$ gcloud compute disks list
NAME LOCATION LOCATION_SCOPE SIZE_GB TYPE STATUS
instance-20240914-173145-boot us-central1-a zone 150 pd-standard READY
instance-20240914-173145-data-workspace us-central1-a zone 100 pd-balanced READY
c4a-rhel-vm us-central1-a zone 50 hyperdisk-balanced READY (1)
1 | Hyperdisk-balanced disk. |
If using storage pools, check that the volume is provisioned as specified in your storage class and PVC by running the following command:
$ gcloud compute storage-pools list-disks pool-us-east4-c --zone=us-east4-c
NAME STATUS PROVISIONED_IOPS PROVISIONED_THROUGHPUT SIZE_GB
pvc-1ff52479-4c81-4481-aa1d-b21c8f8860c6 READY 3000 140 2048
Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code.
CSI Operators give OpenShift Dedicated users storage options, such as volume snapshots, that are not possible with in-tree volume plugins.
The Google Cloud Platform (GCP) persistent disk (PD) Container Storage Interface (CSI) driver uses the CSI external-provisioner sidecar as a controller. This is a separate helper container that is deployed with the CSI driver. The sidecar manages persistent volumes (PVs) by triggering the CreateVolume operation.
The GCP PD CSI driver uses the csi.storage.k8s.io/fstype parameter key to support dynamic provisioning. The following table describes all the GCP PD CSI storage class parameters that are supported by OpenShift Dedicated.
Parameter | Values | Default | Description |
---|---|---|---|
type | pd-ssd or pd-standard | pd-standard | Allows you to choose between standard PVs or solid-state-drive PVs. The driver does not validate the value, thus all the possible values are accepted. |
replication-type | none or regional-pd | none | Allows you to choose between zonal or regional PVs. |
disk-encryption-kms-key | Fully qualified resource identifier for the key to use to encrypt new disks. | Empty string | Uses customer-managed encryption keys (CMEK) to encrypt new disks. |
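As an illustration of how these parameters fit together, a storage class that provisions SSD-backed zonal volumes formatted with ext4 might look like the following sketch; the class name is an assumption for this example and is not created by default:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-gce-pd-ssd   # example name
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-ssd
  replication-type: none
  csi.storage.k8s.io/fstype: ext4
volumeBindingMode: WaitForFirstConsumer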
When you create a PersistentVolumeClaim object, OpenShift Dedicated provisions a new persistent volume (PV) and creates a PersistentVolume object. You can add a custom encryption key in Google Cloud Platform (GCP) to protect a PV in your cluster by encrypting the newly created PV.
For encryption, the newly created PV uses customer-managed encryption keys (CMEK) on a cluster with a new or existing Google Cloud Key Management Service (KMS) key.
You are logged in to a running OpenShift Dedicated cluster.
You have created a Cloud KMS key ring and key version.
For more information about CMEK and Cloud KMS resources, see Using customer-managed encryption keys (CMEK).
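If you still need to create the Cloud KMS resources, the following gcloud commands show one way to do so; the key ring name, key name, and location are placeholders:
$ gcloud kms keyrings create my-keyring --location us-central1
$ gcloud kms keys create my-key --keyring my-keyring --location us-central1 --purpose encryption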
To create a custom-encrypted PV, complete the following steps:
Create a storage class with the Cloud KMS key. The following example enables dynamic provisioning of encrypted volumes:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-gce-pd-cmek
provisioner: pd.csi.storage.gke.io
volumeBindingMode: "WaitForFirstConsumer"
allowVolumeExpansion: true
parameters:
  type: pd-standard
  disk-encryption-kms-key: projects/<key-project-id>/locations/<location>/keyRings/<key-ring>/cryptoKeys/<key> (1)
1 | This field must be the resource identifier for the key that will be used to encrypt new disks. Values are case-sensitive. For more information about providing key ID values, see Retrieving a resource’s ID and Getting a Cloud KMS resource ID. |
You cannot add the disk-encryption-kms-key parameter to an existing storage class.
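Assuming the storage class manifest above was saved as csi-gce-pd-cmek.yaml (an assumed file name), you might create it on the cluster with:
$ oc create -f csi-gce-pd-cmek.yaml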
Confirm that the storage class is deployed on your OpenShift Dedicated cluster by using the oc command:
$ oc describe storageclass csi-gce-pd-cmek
Name: csi-gce-pd-cmek
IsDefaultClass: No
Annotations: None
Provisioner: pd.csi.storage.gke.io
Parameters: disk-encryption-kms-key=projects/key-project-id/locations/location/keyRings/ring-name/cryptoKeys/key-name,type=pd-standard
AllowVolumeExpansion: true
MountOptions: none
ReclaimPolicy: Delete
VolumeBindingMode: WaitForFirstConsumer
Events: none
Create a file named pvc.yaml that references the storage class object that you created in the previous step:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: podpvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: csi-gce-pd-cmek
  resources:
    requests:
      storage: 6Gi
If you marked the new storage class as default, you can omit the storageClassName field.
Apply the PVC on your cluster:
$ oc apply -f pvc.yaml
Get the status of your PVC and verify that it is created and bound to a newly provisioned PV:
$ oc get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
podpvc Bound pvc-e36abf50-84f3-11e8-8538-42010a800002 6Gi RWO csi-gce-pd-cmek 9s
If your storage class has the WaitForFirstConsumer volume binding mode, the PVC remains in the Pending state until a pod that uses it is created.
Your CMEK-protected PV is now ready to use with your OpenShift Dedicated cluster.
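For illustration, a pod that consumes the CMEK-protected claim might look like the following sketch; the pod name, container image, and mount path are placeholders, while the claim name matches the PVC created in this procedure:
apiVersion: v1
kind: Pod
metadata:
  name: web-server                       # placeholder pod name
spec:
  containers:
  - name: web-server
    image: nginx                         # placeholder image
    volumeMounts:
    - name: encrypted-volume
      mountPath: /var/lib/www/html       # placeholder mount path
  volumes:
  - name: encrypted-volume
    persistentVolumeClaim:
      claimName: podpvc                  # PVC created above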