
Overview

OpenShift Container Platform can provision persistent volumes (PVs) by using the Container Storage Interface (CSI) driver for Google Cloud Platform (GCP) Filestore Storage.

Familiarity with persistent storage and configuring CSI volumes is recommended when working with a CSI Operator and driver.

To create CSI-provisioned PVs that mount to GCP Filestore Storage assets, you install the GCP Filestore CSI Driver Operator and the GCP Filestore CSI driver in the openshift-cluster-csi-drivers namespace.

  • The GCP Filestore CSI Driver Operator does not provide a storage class by default, but you can create one if needed. The Operator supports dynamic volume provisioning, which creates storage volumes on demand and eliminates the need for cluster administrators to pre-provision storage.

  • The GCP Filestore CSI driver enables you to create and mount GCP Filestore PVs.

OpenShift Container Platform GCP Filestore supports Workload Identity. This allows users to access Google Cloud resources using federated identities instead of a service account key. GCP Workload Identity must be enabled globally during installation, and then configured for the GCP Filestore CSI Driver Operator. For more information, see Installing the GCP Filestore CSI Driver Operator.

About CSI

Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code.

CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins.

Installing the GCP Filestore CSI Driver Operator

Preparing to install the GCP Filestore CSI Driver Operator with Workload Identity

If you plan to use GCP Workload Identity with Google Cloud Platform (GCP) Filestore, you must obtain certain parameters that you will use during the installation of the GCP Filestore Container Storage Interface (CSI) Driver Operator.

Prerequisites
  • Access to the cluster as a user with the cluster-admin role.

Procedure

To prepare to install the GCP Filestore CSI Driver Operator with Workload Identity:

  1. Obtain the project number:

    1. Obtain the project ID by running the following command:

      $ export PROJECT_ID=$(oc get infrastructure/cluster -o jsonpath='{.status.platformStatus.gcp.projectID}')
    2. Obtain the project number, using the project ID, by running the following command:

      $ gcloud projects describe $PROJECT_ID --format="value(projectNumber)"
  2. Find the identity pool ID and the provider ID:

    During cluster installation, the names of these resources are provided to the Cloud Credential Operator utility (ccoctl) with the --name parameter. See "Creating GCP resources with the Cloud Credential Operator utility". If you no longer have that value, you can list the existing pools and providers, as shown in the sketch after this procedure.

  3. Create Workload Identity resources for the GCP Filestore Operator:

    1. Create a CredentialsRequest file using the following example file:

      Example Credentials Request YAML file
      apiVersion: cloudcredential.openshift.io/v1
      kind: CredentialsRequest
      metadata:
        name: openshift-gcp-filestore-csi-driver-operator
        namespace: openshift-cloud-credential-operator
        annotations:
          include.release.openshift.io/self-managed-high-availability: "true"
          include.release.openshift.io/single-node-developer: "true"
      spec:
        serviceAccountNames:
        - gcp-filestore-csi-driver-operator
        - gcp-filestore-csi-driver-controller-sa
        secretRef:
          name: gcp-filestore-cloud-credentials
          namespace: openshift-cluster-csi-drivers
        providerSpec:
          apiVersion: cloudcredential.openshift.io/v1
          kind: GCPProviderSpec
          predefinedRoles:
          - roles/file.editor
          - roles/resourcemanager.tagUser
          skipServiceCheck: true
    2. Use the CredentialsRequest file to create a GCP service account by running the following command:

      $ ./ccoctl gcp create-service-accounts --name=<filestore-service-account> \ (1)
        --workload-identity-pool=<workload-identity-pool> \ (2)
        --workload-identity-provider=<workload-identity-provider> \ (3)
        --project=<project-id> \ (4)
        --credentials-requests-dir=/tmp/credreq (5)
      1 <filestore-service-account> is a user-chosen name.
      2 <workload-identity-pool> comes from Step 2 above.
      3 <workload-identity-provider> comes from Step 2 above.
      4 <project-id> comes from Step 1.a above.
      5 The name of the directory where the CredentialsRequest file resides.
      Example output
      2025/02/10 17:47:39 Credentials loaded from gcloud CLI defaults
      2025/02/10 17:47:42 IAM service account filestore-service-account-openshift-gcp-filestore-csi-driver-operator created
      2025/02/10 17:47:44 Unable to add predefined roles to IAM service account, retrying...
      2025/02/10 17:47:59 Updated policy bindings for IAM service account filestore-service-account-openshift-gcp-filestore-csi-driver-operator
      2025/02/10 17:47:59 Saved credentials configuration to: /tmp/install-dir/ (1)
      openshift-cluster-csi-drivers-gcp-filestore-cloud-credentials-credentials.yaml
      1 ccoctl saves the credentials configuration in the current working directory, in this example /tmp/install-dir/.
    3. Find the service account email of the newly created service account by running the following command:

      $ cat /tmp/install-dir/manifests/openshift-cluster-csi-drivers-gcp-filestore-cloud-credentials-credentials.yaml | yq '.data["service_account.json"]' | base64 -d | jq '.service_account_impersonation_url'
      Example output
      https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/filestore-se-openshift-g-ch8cm@openshift-gce-devel.iam.gserviceaccount.com:generateAccessToken

      In this example output, the service account email is filestore-se-openshift-g-ch8cm@openshift-gce-devel.iam.gserviceaccount.com.
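
If you no longer have the --name value from the installation, the following commands are one way to rediscover the workload identity pool ID and provider ID. This is a minimal sketch: it assumes that the gcloud CLI is authenticated against the cluster's project and that the $PROJECT_ID variable is still set from Step 1.

  # List the workload identity pools in the project; the pool ID is the last segment of the "name" field.
  $ gcloud iam workload-identity-pools list --location=global --project=$PROJECT_ID

  # List the providers in a pool; replace <pool_id> with a pool ID from the previous output.
  $ gcloud iam workload-identity-pools providers list \
      --workload-identity-pool=<pool_id> --location=global --project=$PROJECT_ID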

Results

You now have the following parameters that you need to install the GCP Filestore CSI Driver Operator:

  • Project number - from Step 1.b

  • Pool ID - from Step 2

  • Provider ID - from Step 2

  • Service account email - from Step 3.c
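
Optionally, you can confirm that the service account from Step 3 exists and has the expected role bindings before you continue. This is a sketch for illustration only; substitute the service account email from Step 3.c and ensure that $PROJECT_ID is still set from Step 1.

  # Show the service account; an error here means the ccoctl step did not complete.
  $ gcloud iam service-accounts describe <service_account_email> --project=$PROJECT_ID

  # List the project-level roles bound to the service account.
  $ gcloud projects get-iam-policy $PROJECT_ID \
      --flatten="bindings[].members" \
      --filter="bindings.members:<service_account_email>" \
      --format="table(bindings.role)"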

Installing the GCP Filestore CSI Driver Operator

The Google Cloud Platform (GCP) Filestore Container Storage Interface (CSI) Driver Operator is not installed in OpenShift Container Platform by default. Use the following procedure to install the GCP Filestore CSI Driver Operator in your cluster.

Prerequisites
  • Access to the OpenShift Container Platform web console.

  • If using GCP Workload Identity, certain GCP Workload Identity parameters are needed. See the preceding section, "Preparing to install the GCP Filestore CSI Driver Operator with Workload Identity".

Procedure

To install the GCP Filestore CSI Driver Operator from the web console:

  1. Log in to the web console.

  2. Enable the Filestore API in the GCP project by running the following command:

    $ gcloud services enable file.googleapis.com  --project <my_gce_project> (1)
    1 Replace <my_gce_project> with your Google Cloud project ID.

    You can also do this by using the Google Cloud web console.

  3. Install the GCP Filestore CSI Operator:

    1. Click Operators → OperatorHub.

    2. Locate the GCP Filestore CSI Operator by typing GCP Filestore in the filter box.

    3. Click the GCP Filestore CSI Driver Operator button.

    4. On the GCP Filestore CSI Driver Operator page, click Install.

    5. On the Install Operator page, ensure that:

      • All namespaces on the cluster (default) is selected.

      • Installed Namespace is set to openshift-cluster-csi-drivers.

        If using GCP Workload Identity, enter values for the following fields, which you obtained in the preceding section, "Preparing to install the GCP Filestore CSI Driver Operator with Workload Identity":

      • GCP Project Number

      • GCP Pool ID

      • GCP Provider ID

      • GCP Service Account Email

    6. Click Install.

      After the installation finishes, the GCP Filestore CSI Operator is listed in the Installed Operators section of the web console.

  4. Install the GCP Filestore CSI Driver:

    1. Click Administration → CustomResourceDefinitions → ClusterCSIDriver.

    2. On the Instances tab, click Create ClusterCSIDriver.

      Use the following YAML file:

      apiVersion: operator.openshift.io/v1
      kind: ClusterCSIDriver
      metadata:
        name: filestore.csi.storage.gke.io
      spec:
        managementState: Managed
    3. Click Create.

    4. Wait for the following Conditions to change to a "true" status (a CLI check for these conditions is shown after this procedure):

      • GCPFilestoreDriverCredentialsRequestControllerAvailable

      • GCPFilestoreDriverNodeServiceControllerAvailable

      • GCPFilestoreDriverControllerServiceControllerAvailable
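
If you prefer to check the driver from the CLI instead of the web console, the following is a minimal sketch. The condition names come from the ClusterCSIDriver status; the pod names matched by the second command can vary slightly between versions.

  # Inspect the ClusterCSIDriver conditions; each condition listed above should report status "True".
  $ oc get clustercsidriver filestore.csi.storage.gke.io -o jsonpath='{.status.conditions}' | jq

  # The Operator, controller, and node pods run in the openshift-cluster-csi-drivers namespace.
  $ oc get pods -n openshift-cluster-csi-drivers | grep filestore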

Creating a storage class for GCP Filestore Storage

After installing the Operator, you should create a storage class for dynamic provisioning of Google Cloud Platform (GCP) Filestore volumes.

Prerequisites
  • You are logged in to the running OpenShift Container Platform cluster.

Procedure

To create a storage class:

  1. Create a storage class using the following example YAML file (a command to apply it and an example PVC that uses it are shown after this procedure):

    Example YAML file
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: filestore-csi
    provisioner: filestore.csi.storage.gke.io
    parameters:
      connect-mode: DIRECT_PEERING (1)
      network: network-name (2)
    allowVolumeExpansion: true
    volumeBindingMode: WaitForFirstConsumer
    1 For a shared VPC, use the connect-mode parameter set to PRIVATE_SERVICE_ACCESS. For a non-shared VPC, the value is DIRECT_PEERING, which is the default setting.
    2 Specify the name of the GCP virtual private cloud (VPC) network in which the Filestore instances should be created.
  2. Specify the name of the VPC network in which the Filestore instances should be created.

    Specifying the VPC network in which the Filestore instances should be created is recommended. If no VPC network is specified, the Container Storage Interface (CSI) driver tries to create the instances in the default VPC network of the project.

    On installer-provisioned infrastructure (IPI) installations, the VPC network name is typically the cluster name with the suffix "-network". However, on user-provisioned infrastructure (UPI) installations, the VPC network name can be any value chosen by the user.

    For a shared VPC (connect-mode = PRIVATE_SERVICE_ACCESS), the network needs to be the full VPC name. For example: projects/shared-vpc-name/global/networks/gcp-filestore-network.

    You can find the VPC network name by inspecting the MachineSet objects with the following command:

    $ oc -n openshift-machine-api get machinesets -o yaml | grep "network:"
                - network: gcp-filestore-network
    (...)

    In this example, the VPC network name is "gcp-filestore-network".
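
After saving the storage class to a file, you can apply it and then request a volume from it. The following is a minimal sketch: the file name, PVC name, and namespace are chosen for illustration, and the 1Ti request is sized for a basic-tier Filestore instance, which has a large minimum capacity.

  $ oc create -f filestore-csi-storageclass.yaml

  Example PVC YAML file
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: filestore-pvc
    namespace: my-namespace
  spec:
    accessModes:
    - ReadWriteMany
    storageClassName: filestore-csi
    resources:
      requests:
        storage: 1Ti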

Destroying clusters and GCP Filestore

Typically, if you destroy a cluster, the OpenShift Container Platform installer deletes all of the cloud resources that belong to that cluster. However, due to the special nature of the Google Cloud Platform (GCP) Filestore resources, the automated cleanup process might not remove all of them in some rare cases.

Therefore, Red Hat recommends that you verify that all cluster-owned Filestore resources are deleted by the uninstall process.

Procedure

To verify that all GCP Filestore resources that belonged to the cluster have been deleted:

  1. Access your Google Cloud account using the GUI or CLI.

  2. Search for any resources with the kubernetes-io-cluster-${CLUSTER_ID}=owned label.

    Since the cluster ID is unique to the deleted cluster, there should not be any remaining resources with that cluster ID. An example CLI search is shown after this procedure.

  3. In the unlikely case that some resources remain, delete them.
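
One way to perform the search in Step 2 from the CLI is shown in the following sketch. Substitute your project ID and the ID of the destroyed cluster; the label filter assumes that the resources were labeled by the installer in the usual way.

  # List any Filestore instances that still carry the destroyed cluster's ownership label.
  $ gcloud filestore instances list --project=<project_id> \
      --filter="labels.kubernetes-io-cluster-<cluster_id>=owned"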