OpenShift Dedicated can provision persistent volumes (PVs) using the Container Storage Interface (CSI) driver for Google Cloud Platform (GCP) Filestore Storage.
Familiarity with persistent storage and configuring CSI volumes is recommended when working with a CSI Operator and driver.
To create CSI-provisioned PVs that mount to GCP Filestore Storage assets, you install the GCP Filestore CSI Driver Operator and the GCP Filestore CSI driver in the openshift-cluster-csi-drivers namespace.
The GCP Filestore CSI Driver Operator does not provide a storage class by default, but you can create one if needed. The Operator supports dynamic volume provisioning, which allows storage volumes to be created on demand and eliminates the need for cluster administrators to pre-provision storage.
The GCP Filestore CSI driver enables you to create and mount GCP Filestore PVs.
Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code.
CSI Operators give OpenShift Dedicated users storage options, such as volume snapshots, that are not possible with in-tree volume plugins.
The Google Cloud Platform (GCP) Filestore Container Storage Interface (CSI) Driver Operator is not installed in OpenShift Dedicated by default. Use the following procedure to install the GCP Filestore CSI Driver Operator in your cluster.
Access to the OpenShift Dedicated web console.
To install the GCP Filestore CSI Driver Operator from the web console:
Log in to the OpenShift Cluster Manager.
Select your cluster.
Click Open console and log in with your credentials.
Enable the Filestore API in the GCE project by running the following command:
$ gcloud services enable file.googleapis.com --project <my_gce_project> (1)
(1) Replace <my_gce_project> with your Google Cloud project.
You can also do this by using the Google Cloud web console.
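Optionally, you can confirm from the command line that the API is enabled. The following check is a suggested example, not part of the official procedure; <my_gce_project> is the same project placeholder used above:
$ gcloud services list --enabled --project <my_gce_project> | grep file.googleapis.com
If the API is enabled, the output lists file.googleapis.com.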
Install the GCP Filestore CSI Operator:
Click Operators → OperatorHub.
Locate the GCP Filestore CSI Driver Operator by typing GCP Filestore in the filter box.
Click the GCP Filestore CSI Driver Operator button.
On the GCP Filestore CSI Driver Operator page, click Install.
On the Install Operator page, ensure that:
All namespaces on the cluster (default) is selected.
Installed Namespace is set to openshift-cluster-csi-drivers.
Click Install.
After the installation finishes, the GCP Filestore CSI Operator is listed in the Installed Operators section of the web console.
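You can also confirm the installation from the command line. This is an optional check rather than part of the official procedure, and the exact pod names vary by release:
$ oc get pods -n openshift-cluster-csi-drivers
The GCP Filestore CSI Driver Operator pod should appear in the Running state.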
Install the GCP Filestore CSI Driver:
Click Administration → CustomResourceDefinitions → ClusterCSIDriver.
On the Instances tab, click Create ClusterCSIDriver.
Use the following YAML file:
apiVersion: operator.openshift.io/v1
kind: ClusterCSIDriver
metadata:
  name: filestore.csi.storage.gke.io
spec:
  managementState: Managed
Click Create.
Wait for the following Conditions to change to a "true" status:
GCPFilestoreDriverCredentialsRequestControllerAvailable
GCPFilestoreDriverNodeServiceControllerAvailable
GCPFilestoreDriverControllerServiceControllerAvailable
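You can also watch these conditions from the command line by inspecting the ClusterCSIDriver status. The following command is a suggested check; the condition names listed above appear under status.conditions in the output:
$ oc get clustercsidriver filestore.csi.storage.gke.io -o yaml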
After installing the Operator, you should create a storage class for dynamic provisioning of Google Cloud Platform (GCP) Filestore volumes.
You are logged in to the running OpenShift Dedicated cluster.
To create a storage class:
Create a storage class using the following example YAML file:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: filestore-csi
provisioner: filestore.csi.storage.gke.io
parameters:
  connect-mode: DIRECT_PEERING (1)
  network: network-name (2)
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
(1) For a shared VPC, set the connect-mode parameter to PRIVATE_SERVICE_ACCESS. For a non-shared VPC, the value is DIRECT_PEERING, which is the default setting.
(2) Specify the name of the GCP virtual private cloud (VPC) network in which Filestore instances should be created.
It is recommended that you specify the VPC network in which the Filestore instances should be created. If no VPC network is specified, the Container Storage Interface (CSI) driver tries to create the instances in the default VPC network of the project. On installer-provisioned infrastructure (IPI) installations, the VPC network name is typically the cluster name with the suffix "-network". However, on user-provisioned infrastructure (UPI) installations, the VPC network name can be any value chosen by the user.
For a shared VPC (connect-mode = PRIVATE_SERVICE_ACCESS), the network needs to be the full VPC name. For example: projects/shared-vpc-name/global/networks/gcp-filestore-network.
You can find the VPC network name by inspecting the MachineSet objects with the following command:
$ oc -n openshift-machine-api get machinesets -o yaml | grep "network:"
- network: gcp-filestore-network
(...)
In this example, the VPC network name is "gcp-filestore-network".
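After the storage class exists, workloads can request Filestore-backed storage through a persistent volume claim (PVC). The following PVC is an illustrative sketch, not part of the official procedure; the claim name and requested size are placeholders, and the minimum size that Filestore accepts depends on the Filestore instance tier:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: filestore-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Ti
  storageClassName: filestore-csi
Because the storage class sets volumeBindingMode: WaitForFirstConsumer, the Filestore instance is not provisioned until a pod that uses the claim is scheduled.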
Typically, if you destroy a cluster, the OpenShift Dedicated installer deletes all of the cloud resources that belong to that cluster. However, due to the special nature of the Google Cloud Platform (GCP) Filestore resources, the automated cleanup process might not remove all of them in some rare cases.
Therefore, Red Hat recommends that you verify that all cluster-owned Filestore resources are deleted by the uninstall process.
To verify that all cluster-owned GCP Filestore resources have been deleted:
Access your Google Cloud account using the GUI or CLI.
Search for any resources with the kubernetes-io-cluster-${CLUSTER_ID}=owned label.
Since the cluster ID is unique to the deleted cluster, there should not be any remaining resources with that cluster ID.
In the unlikely case that some resources remain, delete them.
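For example, Filestore instances can be listed and filtered by label from the command line. This is a suggested check rather than part of the official procedure; replace <my_gce_project> with your Google Cloud project and ${CLUSTER_ID} with the ID of the deleted cluster, and note that the label filter syntax is an assumption that might need adjusting for your gcloud version:
$ gcloud filestore instances list --project <my_gce_project> --filter="labels.kubernetes-io-cluster-${CLUSTER_ID}=owned"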