OpenShift Container Platform clusters can be provisioned with persistent storage using NFS. Persistent volumes (PVs) and persistent volume claims (PVCs) provide a convenient method for sharing a volume across a project. While the NFS-specific information contained in a PV definition could also be defined directly in a Pod definition, doing so does not create the volume as a distinct cluster resource, making the volume more susceptible to conflicts.
Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. To provision NFS volumes, a list of NFS servers and export paths is all that is required.
Create an object definition for the PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001 (1)
spec:
  capacity:
    storage: 5Gi (2)
  accessModes:
    - ReadWriteOnce (3)
  nfs: (4)
    path: /tmp (5)
    server: 172.17.0.2 (6)
  persistentVolumeReclaimPolicy: Retain (7)
1. The name of the volume. This is the PV identity in various oc <command> pod commands.
2. The amount of storage allocated to this volume.
3. Though this appears to be related to controlling access to the volume, it is actually used similarly to labels to match a PVC to a PV. Currently, no access rules are enforced based on the accessModes.
4. The volume type being used, in this case the nfs plug-in.
5. The path that is exported by the NFS server.
6. The host name or IP address of the NFS server.
7. The reclaim policy for the PV. This defines what happens to a volume when it is released.
Each NFS volume must be mountable by all schedulable nodes in the cluster.
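Save the definition to a file, for example pv0001.yaml (the file name is illustrative), and create the PV:

$ oc create -f pv0001.yaml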
Verify that the PV was created:
$ oc get pv

NAME     LABELS   CAPACITY   ACCESSMODES   STATUS      CLAIM   REASON   AGE
pv0001   <none>   5Gi        RWO           Available                    31s
Create a persistent volume claim that binds to the new PV:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-claim1
spec:
  accessModes:
    - ReadWriteOnce (1)
  resources:
    requests:
      storage: 5Gi (2)
1. As mentioned above for PVs, the accessModes do not enforce security, but rather act as labels to match a PV to a PVC.
2. This claim looks for PVs offering 5Gi or greater capacity.
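Save the claim to a file, for example nfs-claim1.yaml (the file name is illustrative), and create the PVC:

$ oc create -f nfs-claim1.yaml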
Verify that the persistent volume claim was created:
$ oc get pvc

NAME         STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-claim1   Bound    pv0001   5Gi        RWO            gp2            2m
You can use disk partitions to enforce disk quotas and size constraints. Each partition can be its own export. Each export is one PV. OpenShift Container Platform enforces unique names for PVs, but the uniqueness of the NFS volume’s server and path is up to the administrator.
Enforcing quotas in this way allows the developer to request persistent storage by a specific amount, such as 10Gi, and be matched with a corresponding volume of equal or greater capacity.
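For example, a hypothetical /etc/exports layout in which each partition is its own export, and each export backs exactly one PV (the /exports paths are illustrative):

/exports/pv0001 *(rw,root_squash)
/exports/pv0002 *(rw,root_squash)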
This section covers NFS volume security, including matching permissions and SELinux considerations. The user is expected to understand the basics of POSIX permissions, process UIDs, supplemental groups, and SELinux.
Developers request NFS storage by referencing either a PVC by name or the NFS volume plug-in directly in the volumes section of their Pod definition.
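For example, a minimal Pod definition that mounts the nfs-claim1 PVC created above might look like the following (the Pod name, container image, and mount path are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: nfs-pod
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: nfs-vol
      mountPath: /usr/share/nginx/html
  volumes:
  - name: nfs-vol
    persistentVolumeClaim:
      claimName: nfs-claim1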
The /etc/exports file on the NFS server contains the accessible NFS directories. The target NFS directory has POSIX owner and group IDs. The OpenShift Container Platform NFS plug-in mounts the container's NFS directory with the same POSIX ownership and permissions found on the exported NFS directory. However, the container is not run with its effective UID equal to the owner of the NFS mount, which is the desired behavior.
As an example, if the target NFS directory appears on the NFS server as:
$ ls -lZ /opt/nfs -d
drwxrws---. nfsnobody 5555 unconfined_u:object_r:usr_t:s0   /opt/nfs

$ id nfsnobody
uid=65534(nfsnobody) gid=65534(nfsnobody) groups=65534(nfsnobody)
Then the container must match SELinux labels, and either run with a UID of 65534, the nfsnobody owner, or with 5555 in its supplemental groups in order to access the directory.
The owner ID of 65534 is used as an example. Even though NFS's root_squash maps root, uid 0, to nfsnobody, uid 65534, NFS exports can have arbitrary owner IDs. Owner 65534 is not required for NFS exports.
The recommended way to handle NFS access, assuming it is not an option to change permissions on the NFS export, is to use supplemental groups. Supplemental groups in OpenShift Container Platform are used for shared storage, of which NFS is an example. In contrast, block storage, such as iSCSI, uses the fsGroup SCC strategy and the fsGroup value in the Pod's securityContext.
To gain access to persistent storage, it is generally preferable to use supplemental group IDs versus user IDs.
Because the group ID on the example target NFS directory is 5555, the Pod can define that group ID using supplementalGroups under the Pod's securityContext definition. For example:
spec:
  containers:
  - name:
    ...
  securityContext: (1)
    supplementalGroups: [5555] (2)
1. securityContext must be defined at the Pod level, not under a specific container.
2. An array of GIDs defined for the Pod. In this case, there is one element in the array. Additional GIDs would be comma-separated.
Assuming there are no custom SCCs that might satisfy the Pod's requirements, the Pod likely matches the restricted SCC. This SCC has the supplementalGroups strategy set to RunAsAny, meaning that any supplied group ID is accepted without range checking.

As a result, the above Pod passes admission and is launched. However, if group ID range checking is desired, a custom SCC is the preferred solution. A custom SCC can be created such that minimum and maximum group IDs are defined, group ID range checking is enforced, and a group ID of 5555 is allowed.
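A minimal sketch of such a custom SCC follows. The SCC name nfs-scc and the group ID range are illustrative, and the apiVersion may vary with the OpenShift Container Platform version; the other strategy stanzas shown mirror the restricted SCC:

kind: SecurityContextConstraints
apiVersion: security.openshift.io/v1
metadata:
  name: nfs-scc            # hypothetical name
allowPrivilegedContainer: false
runAsUser:
  type: MustRunAsRange     # keep UID range checking
seLinuxContext:
  type: MustRunAs
fsGroup:
  type: MustRunAs
supplementalGroups:
  type: MustRunAs          # enforce group ID range checking
  ranges:
  - min: 5000              # illustrative range that includes 5555
    max: 6000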
To use a custom SCC, you must first add it to the appropriate service account. For example, use the default service account in the given project unless another has been specified on the Pod specification.
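For example, to add the hypothetical nfs-scc shown above to the default service account in the current project:

$ oc adm policy add-scc-to-user nfs-scc -z default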
User IDs can be defined in the container image or in the Pod definition.
It is generally preferable to use supplemental group IDs to gain access to persistent storage versus using user IDs.
In the example target NFS directory shown above, the container needs its UID set to 65534, ignoring group IDs for the moment, so the following can be added to the Pod definition:
spec:
  containers: (1)
  - name:
    ...
    securityContext:
      runAsUser: 65534 (2)
1. Pods contain a securityContext specific to each container and a Pod's securityContext which applies to all containers defined in the Pod.
2. 65534 is the nfsnobody user.
Assuming the default project and the restricted SCC, the Pod's requested user ID of 65534 is not allowed, and therefore the Pod fails. The Pod fails for the following reasons:

1. It requests 65534 as its user ID.
2. All SCCs available to the Pod are examined to see which SCC allows a user ID of 65534. While all policies of the SCCs are checked, the focus here is on user ID.
3. Because all available SCCs use MustRunAsRange for their runAsUser strategy, UID range checking is required.
4. 65534 is not included in the SCC or project's user ID range.
It is generally considered a good practice not to modify the predefined SCCs. The preferred way to fix this situation is to create a custom SCC. A custom SCC can be created such that minimum and maximum user IDs are defined, UID range checking is still enforced, and the UID of 65534 is allowed.
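For example, the runAsUser stanza of such a custom SCC might look like the following, where the single-UID range is illustrative:

runAsUser:
  type: MustRunAsRange     # UID range checking remains enforced
  uidRangeMin: 65534       # allows the nfsnobody UID
  uidRangeMax: 65534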
To use a custom SCC, you must first add it to the appropriate service account. For example, use the default service account in the given project unless another has been specified on the Pod specification.
By default, SELinux does not allow writing from a Pod to a remote NFS server. The NFS volume mounts correctly, but is read-only.
To enable writing to a remote NFS server, follow the procedure below.
The container-selinux package must be installed. This package provides the virt_use_nfs SELinux boolean.
Enable the virt_use_nfs boolean using the following command. The -P option makes this boolean persistent across reboots.
# setsebool -P virt_use_nfs 1
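To confirm that the boolean is enabled, check it with getsebool:

# getsebool virt_use_nfs
virt_use_nfs --> on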
In order to enable arbitrary container users to read and write the volume, each exported volume on the NFS server should conform to the following conditions:
Every export must be exported using the following format:
/<example_fs> *(rw,root_squash)
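After editing /etc/exports on the NFS server, re-export the file systems so that the change takes effect:

# exportfs -ra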
The firewall must be configured to allow traffic to the mount point.
For NFSv4, configure the default port 2049 (nfs).
# iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT
For NFSv3, there are three ports to configure: 2049 (nfs), 20048 (mountd), and 111 (portmapper).
# iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT
# iptables -I INPUT 1 -p tcp --dport 20048 -j ACCEPT
# iptables -I INPUT 1 -p tcp --dport 111 -j ACCEPT
The NFS export and directory must be set up so that they are accessible by the target Pods. Either set the export to be owned by the container's primary UID, or supply the Pod group access using supplementalGroups, as shown in group IDs above.
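For example, to grant group access matching the supplementalGroups example above (the /opt/nfs directory and GID 5555 follow the earlier example; substitute your own export and group ID):

# chgrp 5555 /opt/nfs
# chmod 2770 /opt/nfs

The setgid bit in mode 2770 causes new files to inherit the directory's group, matching the drwxrws--- listing shown earlier.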
NFS implements the OpenShift Container Platform Recyclable plug-in interface. Automatic processes handle reclamation tasks based on policies set on each persistent volume.

By default, PVs are set to Retain.
Once the claim to a PV is deleted, and the PV is released, the PV object should not be reused. Instead, a new PV should be created with the same basic volume details as the original.
For example, the administrator creates a PV named nfs1:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs1
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.1
    path: "/"
The user creates PVC1, which binds to nfs1. The user then deletes PVC1, releasing claim to nfs1. This results in nfs1 being Released.
If the administrator wants to make the same NFS share available,
they should create a new PV with the same NFS server details, but a
different PV name:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs2
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.1
    path: "/"
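Assuming the new definition is saved in a file named nfs2.yaml (the file name is illustrative), the corresponding commands would be:

$ oc delete pv nfs1
$ oc create -f nfs2.yaml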
Deleting the original PV and re-creating it with the same name is discouraged. Attempting to manually change the status of a PV from Released to Available causes errors and potential data loss.
Depending on what version of NFS is being used and how it is configured, there may be additional configuration steps needed for proper export and security mapping. The following are some that may apply:
NFSv4 mount incorrectly shows all files with ownership of nobody:nobody
  This can be attributed to the ID mapping settings, found in /etc/idmapd.conf on your NFS server and client; ensure the Domain setting matches on both.

Disabling ID mapping on NFSv4
  On both the NFS client and server, run:
  # echo 'Y' > /sys/module/nfsd/parameters/nfs4_disable_idmapping