Amazon Web Services Elastic File System (AWS EFS) provides NFS-based shared file storage that can be provisioned for Red Hat OpenShift Service on AWS clusters. AWS also provides and supports an EFS CSI driver for Kubernetes that allows Kubernetes workloads to use this shared file storage.

This document describes the basic steps needed to set up your AWS account to prepare EFS for use by Red Hat OpenShift Service on AWS. For more information about AWS EFS, see the AWS EFS documentation.

Red Hat does not provide official support for this feature, including backup and recovery. The customer is responsible for backing up the EFS data and recovering it in the event of an outage or data loss.

The high-level process to enable EFS on a cluster is:

  1. Create an AWS EFS in the AWS account used by the cluster.

  2. Install the AWS EFS Operator from OperatorHub.

  3. Create SharedVolume custom resources.

  4. Use the generated persistent volume claims in pod spec.volumes.

Prerequisites

  • A Red Hat OpenShift Service on AWS cluster

  • Administrator access to the AWS account of that cluster

Configuring the AWS account

Set up your AWS account to prepare AWS EFS for use by Red Hat OpenShift Service on AWS.

Procedure
  1. Log in to the AWS EC2 Console.

  2. Select the region that matches the cluster region.

  3. Filter the instance list to show only worker EC2 instances, and select one of the instances. Note the VPC ID and the security group ID; these values are required later in the process.

  4. Click the Security tab, and click the Security Group Name.

  5. From the Actions dropdown menu, click Edit Inbound Rules. Scroll to the bottom, and click Add Rule.

  6. Add an NFS rule that allows NFS traffic from the VPC private CIDR.
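
    For example, the same rule can be added with the AWS CLI. The security group ID and CIDR below are placeholder values; replace them with the values that you noted for your cluster:

      $ aws ec2 authorize-security-group-ingress \
          --group-id sg-0123456789abcdef0 \
          --protocol tcp --port 2049 \
          --cidr 10.0.0.0/16

    NFS traffic uses TCP port 2049.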

  7. Open the Amazon EFS page. To create the EFS, click Create file system.

  8. Click Customize and proceed through the wizard.

    1. In Step 2, configure the network access:

      1. Select the VPC of the cluster that you noted previously.

      2. Ensure that the private subnets are selected.

      3. Select the Security Group Name that you noted previously for the EC2 worker instances.

      4. Click Next.

    2. In Step 3, configure the client access:

      1. Click Add access point.

      2. Enter a unique Path such as /access_point_1.

      3. Configure the Owner fields with ownership or permissions that allow write access for your worker pods. For example, if your worker pods run with group ID 100, you can set that ID as your Owner Group ID and ensure the permissions include g+rwx.

  9. Continue through the wizard steps, and click Create File System.

  10. After the file system is created:

    1. Note the file system ID for later use.

    2. Click Manage client access and note the access point ID.

To create separate shared data stores, repeat steps 5 through 10 for each one. In each case, make note of the corresponding file system ID and access point ID.
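
If you need to look up these IDs again later, you can list them with the AWS CLI. The file system ID below matches the example values used in this document:

    $ aws efs describe-file-systems --query 'FileSystems[].FileSystemId'
    $ aws efs describe-access-points --file-system-id fs-0123cdef --query 'AccessPoints[].AccessPointId'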

Installing the EFS Operator

Procedure
  1. Log in to the OpenShift Web UI for your cluster.

  2. Click Operators > OperatorHub.

  3. Search for and select the AWS EFS Operator. Click Install.

  4. Accept the default settings, and click Subscribe.

Creating SharedVolume resources using the console

You must create one SharedVolume resource per file system:access point pair in each project from which you want pods to access it.

Procedure
  1. In the OpenShift web console, create and navigate to a project.

  2. Click Operators > Installed Operators. Find the entry for AWS EFS Operator, and click SharedVolume under Provided APIs.

  3. Click Create SharedVolume.

  4. Edit the sample YAML:

    1. Type a suitable value for name.

    2. Replace the values of accessPointID and fileSystemID with the values from the EFS resources you created earlier.

        apiVersion: aws-efs.managed.openshift.io/v1alpha1
        kind: SharedVolume
        metadata:
          name: sv1
          namespace: efsop2
        spec:
          accessPointID: fsap-0123456789abcdef
          fileSystemID: fs-0123cdef
  5. Click Create.

    The SharedVolume resource is created, and triggers the AWS EFS Operator to generate and associate a PersistentVolume:PersistentVolumeClaim pair with the specified EFS access point.

  6. To verify that the persistent volume claim (PVC) exists and is bound, click Storage > Persistent Volume Claims.

    The PVC name is pvc-<shared_volume_name>. The associated PV name is pv-<project_name>-<shared_volume_name>.

Creating SharedVolume resources using the CLI

You must create one SharedVolume resource per file system:access point pair in each project from which you want pods to access it. Using the CLI, you create a SharedVolume by writing its definition in a YAML or JSON file and applying that file to the cluster.

Procedure
  1. Create a YAML file that defines the SharedVolume, using the accessPointID and fileSystemID values from the EFS resources that you created earlier.

      apiVersion: aws-efs.managed.openshift.io/v1alpha1
      kind: SharedVolume
      metadata:
        name: sv1
        namespace: efsop2
      spec:
        accessPointID: fsap-0123456789abcdef
        fileSystemID: fs-0123cdef
  2. Apply the file to the cluster using the following command:

    $ oc apply -f <filename>.yaml

    The SharedVolume resource is created, and triggers the AWS EFS Operator to generate and associate a PersistentVolume:PersistentVolumeClaim pair with the specified EFS access point.

  3. To verify that the PVC exists and is bound, run the oc get pvc command and confirm that the PVC status is Bound.

    The PVC name is pvc-<shared_volume_name>. The associated PV name is pv-<project_name>-<shared_volume_name>.
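
For example, with the sample SharedVolume sv1 in the efsop2 project shown earlier, the generated objects can be inspected with the oc CLI:

    $ oc get pvc pvc-sv1 -n efsop2
    $ oc get pv pv-efsop2-sv1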

Connecting pods

The persistent volume claim (PVC) that was created in your project is ready for use. You can create a sample pod to test this PVC.

Procedure
  1. Create and navigate to a project.

  2. Click Workloads > Pods > Create Pod.

  3. Enter the YAML information. Use the name of your PersistentVolumeClaim object under .spec.volumes[].persistentVolumeClaim.claimName.

    Example
    apiVersion: v1
    kind: Pod
    metadata:
      name: test-efs
    spec:
      volumes:
        - name: efs-storage-vol
          persistentVolumeClaim:
            claimName: pvc-sv1
      containers:
        - name: test-efs
          image: centos:latest
          command: [ "/bin/bash", "-c", "--" ]
          args: [ "while true; do touch /mnt/efs-data/verify-efs && echo 'hello efs' && sleep 30; done;" ]
          volumeMounts:
            - mountPath: "/mnt/efs-data"
              name: efs-storage-vol
  4. After the pod is created, click Workloads > Pods > Logs to verify the pod logs.
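
You can also verify from the oc CLI that the example pod is running and writing to the EFS volume:

    $ oc logs test-efs
    $ oc exec test-efs -- ls /mnt/efs-data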

Uninstalling the EFS Operator

Procedure

To remove the Operator from your cluster:

  1. Delete all of the workloads using the persistent volume claims that were generated by the Operator.

  2. Delete all of the shared volumes from all of the namespaces. The Operator automatically removes the associated persistent volumes and persistent volume claims.
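    For example, assuming the resource plural name is sharedvolumes (matching the SharedVolume kind), you can find and delete the resources with the oc CLI:

      $ oc get sharedvolumes --all-namespaces
      $ oc delete sharedvolumes --all -n <project_name>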

  3. Uninstall the Operator:

    1. Click Operators > Installed Operators.

    2. Find the entry for AWS EFS Operator, and click the menu button on the right-hand side of the Operator.

    3. Click Uninstall and confirm the deletion.

  4. Delete the SharedVolume custom resource definition (CRD). This action triggers the deletion of the remaining Operator-owned resources.
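
    Assuming the CRD follows the standard <plural>.<group> naming for the aws-efs.managed.openshift.io API group used in the SharedVolume examples, the command would look like the following. Verify the exact CRD name with oc get crd first:

      $ oc get crd | grep aws-efs
      $ oc delete crd sharedvolumes.aws-efs.managed.openshift.io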