Amazon Web Services Elastic File System (AWS EFS) is a Network File System (NFS) service that can be provisioned for OpenShift Dedicated clusters. AWS also provides and supports an EFS CSI driver for Kubernetes that allows Kubernetes workloads to use this shared file storage.
This document describes the basic steps needed to set up your AWS account to prepare EFS to be used by OpenShift Dedicated. For more information about AWS EFS, see the AWS EFS documentation.
Red Hat does not provide official support for this feature, including backup and recovery. The customer is responsible for backing up the EFS data and recovering it in the event of an outage or data loss.
The high-level process to enable EFS on a cluster is:
Create an AWS EFS in the AWS account used by the cluster.
Install the AWS EFS Operator from OperatorHub.
Create SharedVolume custom resources.
Use the generated persistent volume claims in pod spec.volumes.
Customer Cloud Subscription (CCS) for an OpenShift Dedicated cluster
Administrator access to the AWS account of that cluster
Set up your AWS account to prepare AWS EFS for use by OpenShift Dedicated.
Log in to the AWS EC2 Console.
Select the region that matches the cluster region.
Filter the list to show only worker EC2 instances, and select one of the instances. Note the VPC ID and security group ID. These values are required later in the process.
Click the Security tab, and click the Security Group Name.
From the Actions dropdown menu, click Edit Inbound Rules. Scroll to the bottom, and click Add Rule.
Add an NFS rule that allows NFS traffic from the VPC private CIDR.
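The same inbound rule can be added from the AWS CLI. The security group ID and CIDR below are placeholders; substitute the values you noted from the EC2 console.

```shell
# Allow inbound NFS (TCP port 2049) from the cluster VPC's private CIDR.
# sg-0123456789abcdef0 and 10.0.0.0/16 are placeholder values.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 2049 \
    --cidr 10.0.0.0/16
```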
Open the Amazon EFS page. To create the EFS, click Create file system.
Click Customize and proceed through the wizard.
In Step 2, configure the network access:
Click the VPC of the cluster that you noted previously.
Ensure that the private subnets are selected.
Select the Security Group Name that you noted previously for the EC2 worker instances.
Click Next.
In Step 3, configure the client access:
Click Add access point.
Enter a unique Path such as /access_point_1.
Configure the Owner fields with ownership or permissions that allow write access for your worker pods. For example, if your worker pods run with group ID 100, you can set that ID as your Owner Group ID and ensure the permissions include g+rwx.
Continue through the wizard steps, and click Create File System.
After the file system is created:
Note the file system ID for later use.
Click Manage client access and note the access point ID.
You can add more NFS rules, using steps 5-10, to create separate shared data stores. In each case, make note of the corresponding file system ID and access point ID.
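As a sketch of an alternative to the console wizard, the file system and access point can also be created with the AWS CLI. The file system ID, tag name, path, and POSIX ownership values below are illustrative; adjust the group ID to match the one your worker pods run with.

```shell
# Create the EFS file system (the response includes the FileSystemId
# to note for later).
aws efs create-file-system \
    --tags Key=Name,Value=osd-shared-efs

# Create an access point at a unique path, owned so that pods running
# with group ID 100 have write access (permissions 775 include g+rwx).
aws efs create-access-point \
    --file-system-id fs-0123cdef \
    --posix-user Uid=1000,Gid=100 \
    --root-directory 'Path=/access_point_1,CreationInfo={OwnerUid=1000,OwnerGid=100,Permissions=775}'
```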
Log in to the OpenShift Web UI for your cluster.
Click Operators → OperatorHub.
Search for and select the AWS EFS Operator. Click Install.
Accept the default settings, and click Subscribe.
Creating SharedVolume resources using the console
You must create one SharedVolume resource per file system:access point pair in each project from which you want pods to access it.
In the OpenShift web console, create and navigate to a project.
Click Operators → Installed Operators. Find the entry for AWS EFS Operator, and click SharedVolume under Provided APIs.
Click Create SharedVolume.
Edit the sample YAML:
Type a suitable value for name.
Replace the values of accessPointID and fileSystemID with the values from the EFS resources that you created earlier.
apiVersion: aws-efs.managed.openshift.io/v1alpha1
kind: SharedVolume
metadata:
  name: sv1
  namespace: efsop2
spec:
  accessPointID: fsap-0123456789abcdef
  fileSystemID: fs-0123cdef
Click Create.
The SharedVolume resource is created, and triggers the AWS EFS Operator to generate and associate a PersistentVolume:PersistentVolumeClaim pair with the specified EFS access point.
To verify that the persistent volume claim (PVC) exists and is bound, click Storage → Persistent Volume Claims.
The PVC name is pvc-<shared_volume_name>. The associated PV name is pv-<project_name>-<shared_volume_name>.
Creating SharedVolume resources using the CLI
You must create one SharedVolume resource per file system:access point pair in each project from which you want pods to access it. You can define the SharedVolume resource in YAML or JSON.
Using the oc CLI, create a YAML file using the accessPointID and fileSystemID values from the EFS resources that you created earlier.
apiVersion: aws-efs.managed.openshift.io/v1alpha1
kind: SharedVolume
metadata:
  name: sv1
  namespace: efsop2
spec:
  accessPointID: fsap-0123456789abcdef
  fileSystemID: fs-0123cdef
Apply the file to the cluster using the following command:
$ oc apply -f <filename>.yaml
The SharedVolume resource is created, and triggers the AWS EFS Operator to generate and associate a PersistentVolume:PersistentVolumeClaim pair with the specified EFS access point.
To verify that the PVC exists and is bound, navigate to Storage → Persistent Volume Claims.
The PVC name is pvc-<shared_volume_name>. The associated PV name is pv-<project_name>-<shared_volume_name>.
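The same check can be done from the oc CLI. The names below assume the sample SharedVolume sv1 in the efsop2 project shown above.

```shell
# The PVC should report a STATUS of Bound once the Operator has
# reconciled the SharedVolume resource.
oc get pvc pvc-sv1 -n efsop2

# The matching cluster-scoped persistent volume.
oc get pv pv-efsop2-sv1
```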
The persistent volume claim (PVC) that was created in your project is ready for use. You can create a sample pod to test this PVC.
Create and navigate to a project.
Click Workloads → Pods → Create Pod.
Enter the YAML information. Use the name of your PersistentVolumeClaim object under .spec.volumes[].persistentVolumeClaim.claimName.
apiVersion: v1
kind: Pod
metadata:
  name: test-efs
spec:
  volumes:
    - name: efs-storage-vol
      persistentVolumeClaim:
        claimName: pvc-sv1
  containers:
    - name: test-efs
      image: centos:latest
      command: [ "/bin/bash", "-c", "--" ]
      args: [ "while true; do touch /mnt/efs-data/verify-efs && echo 'hello efs' && sleep 30; done;" ]
      volumeMounts:
        - mountPath: "/mnt/efs-data"
          name: efs-storage-vol
After the pod is created, click Workloads → Pods → Logs to verify the pod logs.
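Alternatively, the logs and the mount can be checked from the CLI. The pod name matches the sample definition above, and the commands assume the pod was created in the efsop2 project.

```shell
# Show the pod's output; it should print 'hello efs' every 30 seconds.
oc logs test-efs -n efsop2

# Confirm the test file was written to the EFS mount.
oc exec test-efs -n efsop2 -- ls /mnt/efs-data
```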
To remove the Operator from your cluster:
Delete all of the workloads using the persistent volume claims that were generated by the Operator.
Delete all of the shared volumes from all of the namespaces. The Operator automatically removes the associated persistent volumes and persistent volume claims.
Uninstall the Operator:
Click Operators → Installed Operators.
Find the entry for AWS EFS Operator, and click the menu button on the right-hand side of the Operator.
Click Uninstall and confirm the deletion.
Delete the shared volume CRD. This action triggers the deletion of the remaining Operator-owned resources.
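A minimal sketch of the same cleanup from the CLI. The CRD name below is inferred from the aws-efs.managed.openshift.io API group and should be confirmed on your cluster before deleting.

```shell
# Delete SharedVolume resources in each project that has them; the
# Operator then removes the generated PVs and PVCs.
oc delete sharedvolume --all -n efsop2

# Confirm the CRD name (inferred, not verified), then delete it to
# remove the remaining Operator-owned resources.
oc get crd | grep aws-efs
oc delete crd sharedvolumes.aws-efs.managed.openshift.io
```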