This topic provides an end-to-end example of how to use an existing Gluster cluster as an OpenShift Enterprise persistent store. It is assumed that a working Gluster cluster is already set up. If not, consult the Red Hat Gluster Storage Administration Guide.
Persistent Storage Using GlusterFS provides an explanation of persistent volumes (PVs), persistent volume claims (PVCs), and using GlusterFS as persistent storage.
All oc commands are executed on the OpenShift Enterprise master host.
The glusterfs-fuse library must be installed on all schedulable OpenShift Enterprise nodes:
# yum install -y glusterfs-fuse
The OpenShift Enterprise all-in-one host is often not used to run pod workloads and, thus, is not included as a schedulable node.
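If you are unsure which nodes are schedulable, they can be listed from the master (the exact output depends on your environment):

# oc get nodes

Nodes whose STATUS column includes SchedulingDisabled do not run pods and therefore do not require the glusterfs-fuse package.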
The named endpoints define each node in the Gluster-trusted storage pool:
apiVersion: v1
kind: Endpoints
metadata:
  name: gluster-endpoints (1)
subsets:
- addresses: (2)
  - ip: 192.168.122.21
  ports: (3)
  - port: 1
    protocol: TCP
- addresses:
  - ip: 192.168.122.22
  ports:
  - port: 1
    protocol: TCP
1 | The name of the endpoints is used in the PV definition below.
2 | An array of IP addresses for each node in the Gluster pool. Currently, host names are not supported.
3 | The port numbers are ignored, but must be legal port numbers. The value 1 is commonly used.
Save the endpoints definition to a file, for example gluster-endpoints.yaml, then create the endpoints object:
# oc create -f gluster-endpoints.yaml
endpoints "gluster-endpoints" created
Verify that the endpoints were created:
# oc get endpoints gluster-endpoints
NAME                ENDPOINTS                            AGE
gluster-endpoints   192.168.122.21:1,192.168.122.22:1    1m
Next, define the persistent volume (PV) specification before creating the object in OpenShift Enterprise:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-pv (1)
spec:
  capacity:
    storage: 1Gi (2)
  accessModes:
  - ReadWriteMany (3)
  glusterfs: (4)
    endpoints: gluster-endpoints (5)
    path: /HadoopVol (6)
    readOnly: false
  persistentVolumeReclaimPolicy: Retain (7)
1 | The name of the PV, which is referenced in pod definitions or displayed in various oc volume commands.
2 | The amount of storage allocated to this volume.
3 | accessModes are used as labels to match a PV and a PVC. They currently do not define any form of access control.
4 | This defines the volume type being used. In this case, the glusterfs plug-in is defined.
5 | This references the endpoints named above.
6 | This is the Gluster volume name, preceded by /.
7 | A volume reclaim policy of Retain indicates that the volume will be preserved after the pods accessing it terminate.
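The Gluster volume referenced by path (HadoopVol in this example) must already exist in the trusted storage pool. If in doubt, verify it from any node in the pool, for example:

# gluster volume info HadoopVol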
Save the PV definition to a file, for example gluster-pv.yaml, and create the persistent volume:
# oc create -f gluster-pv.yaml
persistentvolume "gluster-pv" created
Verify that the persistent volume was created:
# oc get pv
NAME         LABELS    CAPACITY   ACCESSMODES   STATUS      CLAIM     REASON    AGE
gluster-pv   <none>    1Gi        RWX           Available                       37s
A persistent volume claim (PVC) specifies the desired access mode and storage capacity. Currently, based on only these two attributes, a PVC is bound to a single PV. Once a PV is bound to a PVC, that PV is essentially tied to the PVC’s project and cannot be bound by another PVC. There is a one-to-one mapping of PVs and PVCs. However, multiple pods in the same project can use the same PVC.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-claim (1)
spec:
  accessModes:
  - ReadWriteMany (2)
  resources:
    requests:
      storage: 1Gi (3)
1 | The claim name is referenced by the pod under its volumes section.
2 | As mentioned above for PVs, the accessModes do not enforce access rights, but rather act as labels to match a PV to a PVC.
3 | This claim will look for PVs offering 1Gi or greater capacity.
Save the PVC definition to a file, for example gluster-claim.yaml, and create the PVC:
# oc create -f gluster-claim.yaml
persistentvolumeclaim "gluster-claim" created
Verify the PVC was created and bound to the expected PV:
# oc get pvc
NAME            LABELS    STATUS    VOLUME       CAPACITY   ACCESSMODES   AGE
gluster-claim   <none>    Bound     gluster-pv   1Gi        RWX           24s (1)
1 | The claim was bound to the gluster-pv PV.
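If the claim does not bind as expected, additional detail is available from the describe commands, for example:

# oc describe pvc gluster-claim
# oc describe pv gluster-pv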
Access to a node in the Gluster-trusted storage pool is necessary. On this node, examine the glusterfs-fuse mount:
# ls -lZ /mnt/glusterfs/
drwxrwx---. yarn hadoop system_u:object_r:fusefs_t:s0    HadoopVol

# id yarn
uid=592(yarn) gid=590(hadoop) groups=590(hadoop) (1) (2)
1 | The owner has ID 592.
2 | The group has ID 590.
In order to access the HadoopVol volume, the container must match the SELinux label, and either run with a UID of 592 or with 590 in its supplemental groups. It is recommended to gain access to the volume by matching the Gluster mount’s group (590), which is what the pod definition below does.
By default, SELinux does not allow writing from a pod to a remote Gluster server. To enable writing to GlusterFS volumes with SELinux enforcing on each node, run:
# setsebool -P virt_sandbox_use_fusefs on
The virt_sandbox_use_fusefs boolean is defined by the docker-selinux package. If you get an error saying it is not defined, ensure that this package is installed.
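To confirm the boolean is enabled on a node, check it with getsebool, for example:

# getsebool virt_sandbox_use_fusefs
virt_sandbox_use_fusefs --> on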
A pod definition file or a template file can be used to define a pod. Below is a pod specification that creates a single container and mounts the Gluster volume for read-write access:
apiVersion: v1
kind: Pod
metadata:
  name: gluster-pod1
  labels:
    name: gluster-pod1 (1)
spec:
  containers:
  - name: gluster-pod1
    image: busybox (2)
    command: ["sleep", "60000"]
    volumeMounts:
    - name: gluster-vol1 (3)
      mountPath: /usr/share/busybox (4)
      readOnly: false
  securityContext:
    supplementalGroups: [590] (5)
    privileged: false
  volumes:
  - name: gluster-vol1 (3)
    persistentVolumeClaim:
      claimName: gluster-claim (6)
1 | The name of this pod as displayed by oc get pod.
2 | The image run by this pod. In this case, we are telling busybox to sleep.
3 | The name of the volume. This name must be the same in both the containers and volumes sections.
4 | The mount path as seen in the container.
5 | The group ID to be assigned to the container.
6 | The PVC that is bound to the Gluster cluster.
Save the pod definition to a file, for example gluster-pod1.yaml, and create the pod:
# oc create -f gluster-pod1.yaml
pod "gluster-pod1" created
Verify the pod was created:
# oc get pod
NAME           READY     STATUS    RESTARTS   AGE
gluster-pod1   1/1       Running   0          31s (1)
1 | After a minute or so, the pod will be in the Running state.
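As an optional check, assuming the object names used throughout this example, write a file to the mounted volume from inside the pod:

# oc exec gluster-pod1 -- touch /usr/share/busybox/test1

The file should then be visible on the glusterfs-fuse mount examined earlier on the Gluster node, for example under /mnt/glusterfs/HadoopVol.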
More details are shown in the oc describe pod command:
# oc describe pod gluster-pod1
Name:                      gluster-pod1
Namespace:                 default (1)
Image(s):                  busybox
Node:                      rhel7.2-dev/192.168.122.177
Start Time:                Tue, 22 Mar 2016 10:55:57 -0700
Labels:                    name=gluster-pod1
Status:                    Running
Reason:
Message:
IP:                        10.1.0.2 (2)
Replication Controllers:   <none>
Containers:
  gluster-pod1:
    Container ID:   docker://acc0c80c28a5cd64b6e3f2848052ef30a21ee850d27ef5fe959d11da4e5a3f4f
    Image:          busybox
    Image ID:       docker://964092b7f3e54185d3f425880be0b022bfc9a706701390e0ceab527c84dea3e3
    QoS Tier:
      cpu:          BestEffort
      memory:       BestEffort
    State:          Running
      Started:      Tue, 22 Mar 2016 10:56:00 -0700
    Ready:          True
    Restart Count:  0
    Environment Variables:
Conditions:
  Type      Status
  Ready     True
Volumes:
  gluster-vol1:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  gluster-claim (3)
    ReadOnly:   false
  default-token-rbi9o:
    Type:       Secret (a secret that should populate this volume)
    SecretName: default-token-rbi9o
Events: (4)
  FirstSeen  LastSeen  Count  From                   SubobjectPath                      Reason     Message
  ─────────  ────────  ─────  ────                   ─────────────                      ──────     ───────
  2m         2m        1      {scheduler }                                              Scheduled  Successfully assigned gluster-pod1 to rhel7.2-dev
  2m         2m        1      {kubelet rhel7.2-dev}  implicitly required container POD  Pulled     Container image "openshift3/ose-pod:v3.1.1.6" already present on machine
  2m         2m        1      {kubelet rhel7.2-dev}  implicitly required container POD  Created    Created with docker id d5c66b4f3aaa
  2m         2m        1      {kubelet rhel7.2-dev}  implicitly required container POD  Started    Started with docker id d5c66b4f3aaa
  2m         2m        1      {kubelet rhel7.2-dev}  spec.containers{gluster-pod1}      Pulled     Container image "busybox" already present on machine
  2m         2m        1      {kubelet rhel7.2-dev}  spec.containers{gluster-pod1}      Created    Created with docker id acc0c80c28a5
  2m         2m        1      {kubelet rhel7.2-dev}  spec.containers{gluster-pod1}      Started    Started with docker id acc0c80c28a5
1 | The project (namespace) name.
2 | The IP address of the OpenShift Enterprise node running the pod.
3 | The PVC name used by the pod.
4 | The list of events resulting in the pod being launched and the Gluster volume being mounted.
More internal information, including the SCC used to authorize the pod, the pod’s user and group IDs, and the SELinux label, is shown in the output of the oc get pod <name> -o yaml command:
# oc get pod gluster-pod1 -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    openshift.io/scc: restricted (1)
  creationTimestamp: 2016-03-22T17:55:57Z
  labels:
    name: gluster-pod1
  name: gluster-pod1
  namespace: default (2)
  resourceVersion: "511908"
  selfLink: /api/v1/namespaces/default/pods/gluster-pod1
  uid: 545068a3-f057-11e5-a8e5-5254008f071b
spec:
  containers:
  - command:
    - sleep
    - "60000"
    image: busybox
    imagePullPolicy: IfNotPresent
    name: gluster-pod1
    resources: {}
    securityContext:
      privileged: false
      runAsUser: 1000000000 (3)
      seLinuxOptions:
        level: s0:c1,c0 (4)
    terminationMessagePath: /dev/termination-log
    volumeMounts:
    - mountPath: /usr/share/busybox
      name: gluster-vol1
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-rbi9o
      readOnly: true
  dnsPolicy: ClusterFirst
  host: rhel7.2-dev
  imagePullSecrets:
  - name: default-dockercfg-2g6go
  nodeName: rhel7.2-dev
  restartPolicy: Always
  securityContext:
    seLinuxOptions:
      level: s0:c1,c0 (4)
    supplementalGroups:
    - 590 (5)
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  volumes:
  - name: gluster-vol1
    persistentVolumeClaim:
      claimName: gluster-claim (6)
  - name: default-token-rbi9o
    secret:
      secretName: default-token-rbi9o
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2016-03-22T17:56:00Z
    status: "True"
    type: Ready
  containerStatuses:
  - containerID: docker://acc0c80c28a5cd64b6e3f2848052ef30a21ee850d27ef5fe959d11da4e5a3f4f
    image: busybox
    imageID: docker://964092b7f3e54185d3f425880be0b022bfc9a706701390e0ceab527c84dea3e3
    lastState: {}
    name: gluster-pod1
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2016-03-22T17:56:00Z
  hostIP: 192.168.122.177
  phase: Running
  podIP: 10.1.0.2
  startTime: 2016-03-22T17:55:57Z
1 | The SCC used by the pod.
2 | The project (namespace) name.
3 | The UID of the busybox container.
4 | The SELinux label for the container, and the default SELinux label for the entire pod, which happen to be the same here.
5 | The supplemental group ID for the pod (all containers).
6 | The PVC name used by the pod.