Persistent volumes can be mounted to pods that run with the privileged security context constraint (SCC).
While this topic uses GlusterFS as a sample use case for mounting volumes onto privileged pods, it can be adapted to use any supported storage plug-in.
An existing Gluster volume.
glusterfs-fuse installed on all hosts.
Definitions for GlusterFS:
Endpoints and services: gluster-endpoints-service.yaml and gluster-endpoints.yaml (sketched after this list)
Persistent volumes: gluster-pv.yaml
Persistent volume claims: gluster-pvc.yaml
Privileged pods: gluster-nginx-pod.yaml
A user with the cluster-admin role binding. For this guide, that user is called admin.
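For reference, the service and endpoints definitions might look like the following minimal sketch. The object name (gluster-cluster), the port (1), and the two server IPs match the verification output later in this topic; substitute the addresses of your own Gluster servers. Note that the service defines no selector, so the endpoints object supplies the Gluster server addresses manually.

gluster-endpoints-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: gluster-cluster
spec:
  ports:
  - port: 1

gluster-endpoints.yaml:

apiVersion: v1
kind: Endpoints
metadata:
  name: gluster-cluster       # must match the service name
subsets:
- addresses:
  - ip: 192.168.59.102        # example Gluster server IP from this topic's output
  ports:
  - port: 1
- addresses:
  - ip: 192.168.59.103        # example Gluster server IP from this topic's output
  ports:
  - port: 1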
Creating the PersistentVolume makes the storage accessible to users, regardless of their project.
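For reference, gluster-pv.yaml might look like the following sketch. The name, capacity, and access mode match the oc get pv output below; the Gluster volume path (gv0) matches the mount output at the end of this topic but is otherwise an assumption for your environment.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-default-volume
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  glusterfs:
    endpoints: gluster-cluster  # name of the endpoints object created above
    path: gv0                   # Gluster volume name; gv0 is an assumption
    readOnly: false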
As the admin, create the service, endpoint object, and persistent volume:
$ oc create -f gluster-endpoints-service.yaml
$ oc create -f gluster-endpoints.yaml
$ oc create -f gluster-pv.yaml
Verify that the objects were created:
$ oc get svc
NAME              CLUSTER_IP      EXTERNAL_IP   PORT(S)   SELECTOR   AGE
gluster-cluster   172.30.151.58   <none>        1/TCP     <none>     24s
$ oc get ep
NAME              ENDPOINTS                            AGE
gluster-cluster   192.168.59.102:1,192.168.59.103:1   2m
$ oc get pv
NAME                     LABELS   CAPACITY   ACCESSMODES   STATUS      CLAIM   REASON   AGE
gluster-default-volume   <none>   2Gi        RWX           Available                    2d
Adding a regular user to the privileged SCC (or to a group given access to the SCC) allows them to run privileged pods:
As the admin, add a user to the SCC:
$ oadm policy add-scc-to-user privileged <username>
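To confirm the change, you can inspect the privileged SCC; its users list should now include the user. The exact output varies by cluster, and the lines shown here are an abbreviated illustration:

$ oc get scc privileged -o yaml
...
users:
- <username>
...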
Log in as the regular user:
$ oc login -u <username> -p <password>
Then, create a new project:
$ oc new-project <project_name>
As a regular user, create the PersistentVolumeClaim to access the volume:
$ oc create -f gluster-pvc.yaml -n <project_name>
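For reference, gluster-pvc.yaml might look like the following sketch. The claim name must match the claimName used in the pod definition below; the access mode mirrors the PV's RWX mode, and the requested size is an assumption that must fit within the PV's 2Gi capacity.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-claim    # referenced as claimName in the pod definition
spec:
  accessModes:
  - ReadWriteMany        # matches the access mode of the persistent volume
  resources:
    requests:
      storage: 1Gi       # assumed size; must not exceed the PV capacity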
Define your pod to access the claim:
apiVersion: v1
kind: Pod
metadata:
  name: gluster-nginx-priv
spec:
  containers:
  - name: gluster-nginx-priv
    image: fedora/nginx
    volumeMounts:
    - mountPath: /mnt/gluster (1)
      name: gluster-volume-claim
    securityContext:
      privileged: true
  volumes:
  - name: gluster-volume-claim
    persistentVolumeClaim:
      claimName: gluster-claim (2)
(1) The volume mount point within the pod.
(2) The claimName must match the name of the PersistentVolumeClaim (gluster-claim) created above.
Upon pod creation, the mount directory is created and the volume is attached to that mount point.
As the regular user, create a pod from the definition:
$ oc create -f gluster-nginx-pod.yaml
Verify that the pod was created successfully:
$ oc get pods
NAME                 READY     STATUS    RESTARTS   AGE
gluster-nginx-priv   1/1       Running   0          36m
It can take several minutes for the pod to be created.
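If the pod stays in a Pending or ContainerCreating state, the events reported for it can show what is holding it up, such as the image pull or the volume attach:

$ oc describe pod <pod_name>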
Export the pod configuration:
$ oc export pod <pod_name>
Examine the output. Check that openshift.io/scc has the value of privileged:
metadata:
  annotations:
    openshift.io/scc: privileged
Access the pod and check that the volume is mounted:
$ oc rsh <pod_name>
[root@gluster-nginx-priv /]# mount
Examine the output for the Gluster volume:
192.168.59.102:gv0 on /mnt/gluster type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
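As an optional check that the volume is writable (the PV was created with RWX access), touch a file on the mount; test-file is an arbitrary name used here for illustration:

[root@gluster-nginx-priv /]# touch /mnt/gluster/test-file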