
Overview

OpenShift Dedicated can provision persistent volumes (PVs) by using the Container Storage Interface (CSI) driver for Google Cloud Platform (GCP) Filestore Storage.

Familiarity with persistent storage and configuring CSI volumes is recommended when working with a CSI Operator and driver.

To create CSI-provisioned PVs that mount to GCP Filestore Storage assets, you install the GCP Filestore CSI Driver Operator and the GCP Filestore CSI driver in the openshift-cluster-csi-drivers namespace.

  • The GCP Filestore CSI Driver Operator does not provide a storage class by default, but you can create one if needed. The Operator supports dynamic volume provisioning, so storage volumes can be created on demand and cluster administrators do not need to pre-provision storage.

  • The GCP Filestore CSI driver enables you to create and mount GCP Filestore PVs.

About CSI

Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code.

CSI Operators give OpenShift Dedicated users storage options, such as volume snapshots, that are not possible with in-tree volume plugins.
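
For example, when a CSI driver supports snapshots, a point-in-time snapshot of an existing claim can be requested declaratively with a VolumeSnapshot object. The following is a minimal sketch; the VolumeSnapshotClass and PVC names are placeholders and must match objects that already exist in your cluster:

    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: example-snapshot                        # placeholder name
    spec:
      volumeSnapshotClassName: example-snapclass    # placeholder; must match an existing VolumeSnapshotClass
      source:
        persistentVolumeClaimName: example-pvc      # placeholder; the PVC to snapshot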

Installing the GCP Filestore CSI Driver Operator

The Google Cloud Platform (GCP) Filestore Container Storage Interface (CSI) Driver Operator is not installed in OpenShift Dedicated by default. Use the following procedure to install the GCP Filestore CSI Driver Operator in your cluster.

Prerequisites
  • Access to the OpenShift Dedicated web console.

Procedure

To install the GCP Filestore CSI Driver Operator from the web console:

  1. Log in to the OpenShift Cluster Manager.

  2. Select your cluster.

  3. Click Open console and log in with your credentials.

  4. Enable the Filestore API in your Google Cloud project by running the following command:

    $ gcloud services enable file.googleapis.com  --project <my_gce_project> (1)
    1 Replace <my_gce_project> with the name of your Google Cloud project.

    You can also enable the API by using the Google Cloud web console.
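
    To confirm that the API is enabled, you can list the enabled services for the project. This sketch assumes the gcloud CLI is authenticated against the same project:

    $ gcloud services list --enabled --project <my_gce_project> | grep file.googleapis.com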

  5. Install the GCP Filestore CSI Driver Operator:

    1. Click Operators → OperatorHub.

    2. Locate the GCP Filestore CSI Operator by typing GCP Filestore in the filter box.

    3. Click the GCP Filestore CSI Driver Operator button.

    4. On the GCP Filestore CSI Driver Operator page, click Install.

    5. On the Install Operator page, ensure that:

      • All namespaces on the cluster (default) is selected.

      • Installed Namespace is set to openshift-cluster-csi-drivers.

    6. Click Install.

      After the installation finishes, the GCP Filestore CSI Driver Operator is listed in the Installed Operators section of the web console.
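
      You can also confirm the installation from the command line, for example by listing the cluster service versions (CSVs) in the openshift-cluster-csi-drivers namespace and checking that the Operator reports a Succeeded phase:

      $ oc get csv -n openshift-cluster-csi-drivers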

  6. Install the GCP Filestore CSI Driver:

    1. Click Administration → CustomResourceDefinitions → ClusterCSIDriver.

    2. On the Instances tab, click Create ClusterCSIDriver.

      Use the following YAML file:

      apiVersion: operator.openshift.io/v1
      kind: ClusterCSIDriver
      metadata:
        name: filestore.csi.storage.gke.io
      spec:
        managementState: Managed

    3. Click Create.

    4. Wait for the following conditions to change to a "true" status (a CLI check is shown after this list):

      • GCPFilestoreDriverCredentialsRequestControllerAvailable

      • GCPFilestoreDriverNodeServiceControllerAvailable

      • GCPFilestoreDriverControllerServiceControllerAvailable
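
      One way to check these conditions from the command line is to print the ClusterCSIDriver object and review its status conditions:

      $ oc get clustercsidriver filestore.csi.storage.gke.io -o yaml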

Creating a storage class for GCP Filestore Storage

After installing the Operator, create a storage class for dynamic provisioning of Google Cloud Platform (GCP) Filestore volumes.

Prerequisites
  • You are logged in to the running OpenShift Dedicated cluster.

Procedure

To create a storage class:

  1. Create a storage class using the following example YAML file:

    Example YAML file
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: filestore-csi
    provisioner: filestore.csi.storage.gke.io
    parameters:
      connect-mode: DIRECT_PEERING (1)
      network: network-name (2)
    allowVolumeExpansion: true
    volumeBindingMode: WaitForFirstConsumer
    1 For a shared VPC, set the connect-mode parameter to PRIVATE_SERVICE_ACCESS. For a non-shared VPC, the value is DIRECT_PEERING, which is the default setting.
    2 Specify the name of the GCP virtual private cloud (VPC) network in which Filestore instances should be created.
  2. Specify the name of the VPC network in which Filestore instances should be created.

    Specifying the VPC network is recommended. If no VPC network is specified, the Container Storage Interface (CSI) driver tries to create the instances in the default VPC network of the project.

    On installer-provisioned infrastructure (IPI) installations, the VPC network name is typically the cluster name with the suffix "-network". However, on user-provisioned infrastructure (UPI) installations, the VPC network name can be any value chosen by the user.

    For a shared VPC (connect-mode = PRIVATE_SERVICE_ACCESS), the network needs to be the full VPC name. For example: projects/shared-vpc-name/global/networks/gcp-filestore-network.

    You can find the VPC network name by inspecting the MachineSet objects with the following command:

    $ oc -n openshift-machine-api get machinesets -o yaml | grep "network:"
                - network: gcp-filestore-network
    (...)

    In this example, the VPC network name is "gcp-filestore-network".
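
After the storage class exists, workloads can request Filestore-backed storage with a standard persistent volume claim (PVC) that references it. The following is a minimal sketch; the claim name and requested size are placeholders, and GCP Filestore enforces a minimum instance size (typically 1 TiB for the basic tiers), so smaller requests may be rounded up or rejected:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: filestore-pvc                # placeholder name
    spec:
      accessModes:
        - ReadWriteMany                  # Filestore volumes support shared read-write access
      storageClassName: filestore-csi    # the storage class created above
      resources:
        requests:
          storage: 1Ti                   # placeholder size; Filestore enforces a large minimum

Because the storage class sets volumeBindingMode: WaitForFirstConsumer, the Filestore instance is not provisioned until a pod that uses the claim is scheduled.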

Destroying clusters and GCP Filestore

Typically, if you destroy a cluster, the OpenShift Dedicated installer deletes all of the cloud resources that belong to that cluster. However, due to the special nature of the Google Cloud Platform (GCP) Filestore resources, the automated cleanup process might not remove all of them in some rare cases.

Therefore, Red Hat recommends that you verify that all cluster-owned Filestore resources are deleted by the uninstall process.

Procedure

To ensure that all GCP Filestore resources have been deleted:

  1. Access your Google Cloud account using the GUI or CLI.

  2. Search for any resources with the kubernetes-io-cluster-${CLUSTER_ID}=owned label, as shown in the CLI sketch after this procedure.

    Since the cluster ID is unique to the deleted cluster, there should not be any remaining resources with that cluster ID.

  3. In the unlikely case there are some remaining resources, delete them.
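
For example, remaining Filestore instances can be listed with the gcloud CLI by filtering on the cluster label. This is a sketch; <my_gce_project> and <cluster_id> are placeholders for your project name and the ID of the deleted cluster:

    $ gcloud filestore instances list --project <my_gce_project> \
        --filter="labels.kubernetes-io-cluster-<cluster_id>=owned"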

Additional resources