
Overview

This example assumes a functioning OpenShift Container Platform cluster along with Heketi and GlusterFS. All oc commands are executed on the OpenShift Container Platform master host.

This topic provides an end-to-end example of how to dynamically provision GlusterFS volumes. In this example, a simple NGINX HelloWorld application is deployed using the Red Hat Container Native Storage (CNS) solution. CNS hyper-converges GlusterFS storage by containerizing it into the OpenShift Container Platform cluster.

The Red Hat Gluster Storage Administration Guide can also provide additional information about GlusterFS.

To get started, follow the gluster-kubernetes quickstart guide for an easy Vagrant-based installation and deployment of a working OpenShift Container Platform cluster with Heketi and GlusterFS containers.

Verify the Environment and Gather Needed Information

At this point, there should be a working OpenShift Container Platform cluster deployed, and a working Heketi server with GlusterFS.

  1. Verify and view the cluster environment, including nodes and pods:

    $ oc get nodes,pods
    NAME      STATUS    AGE
    master    Ready     22h
    node0     Ready     22h
    node1     Ready     22h
    node2     Ready     22h
    NAME                               READY     STATUS    RESTARTS   AGE       IP               NODE
    glusterfs-node0-2509304327-vpce1   1/1       Running   0          1d        192.168.10.100   node0
    glusterfs-node1-3290690057-hhq92   1/1       Running   0          1d        192.168.10.101   node1 (1)
    glusterfs-node2-4072075787-okzjv   1/1       Running   0          1d        192.168.10.102   node2
    heketi-3017632314-yyngh            1/1       Running   0          1d        10.42.0.0        node0 (2)
    1 Example of GlusterFS storage pods running. There are three in this example.
    2 Heketi server pod.
  2. If not already set in the environment, export the HEKETI_CLI_SERVER:

    $ export HEKETI_CLI_SERVER=$(oc describe svc/heketi | grep "Endpoints:" | awk '{print "http://"$2}')
  3. Identify the Heketi REST URL and server IP address (a quick connectivity check follows this list):

    $ echo $HEKETI_CLI_SERVER
    http://10.42.0.0:8080
  4. Identify the GlusterFS endpoints to pass as a parameter to the storage class, which is used in a later step (heketi-storage-endpoints):

    $ oc get endpoints
    NAME                       ENDPOINTS                                            AGE
    heketi                     10.42.0.0:8080                                       22h
    heketi-storage-endpoints   192.168.10.100:1,192.168.10.101:1,192.168.10.102:1   22h (1)
    kubernetes                 192.168.10.90:6443                                   23h
    1 The defined GlusterFS endpoints. In this example, they are called heketi-storage-endpoints.
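To confirm that the Heketi server identified in step 3 is actually responding, you can query its hello endpoint. A quick connectivity check, assuming the server is reachable from the master host:

    $ curl $HEKETI_CLI_SERVER/hello
    Hello from Heketi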

By default, user_authorization is disabled. If it were enabled, you might need to find the rest user and rest user secret key. (This is not applicable to this example, because any values will work.)
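If authorization were enabled on the Heketi server, the storage class created in the next section would also need to carry valid credentials. A minimal sketch of the additional parameters, assuming the admin user and key configured in the server's heketi.json (the key value shown here is a placeholder):

parameters:
  resturl: "http://10.42.0.0:8080"
  restauthenabled: "true"
  restuser: "admin"
  restuserkey: "<admin key from heketi.json>"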

Create a Storage Class for Your GlusterFS Dynamic Provisioner

Storage classes manage and enable persistent storage in OpenShift Container Platform. Below is an example of a storage class that will be used to request 5GB of on-demand storage for your HelloWorld application.

apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: gluster-heketi  (1)
provisioner: kubernetes.io/glusterfs  (2)
parameters:
  endpoint: "heketi-storage-endpoints"  (3)
  resturl: "http://10.42.0.0:8080"  (4)
  restuser: "joe"  (5)
  restuserkey: "My Secret Life"  (6)
1 Name of the storage class.
2 The provisioner.
3 The GlusterFS-defined endpoint (oc get endpoints).
4 Heketi REST URL, taken from step 3 above (echo $HEKETI_CLI_SERVER).
5 Rest username. This can be any value since authorization is turned off.
6 Rest user key. This can be any value.
  1. Create the Storage Class YAML file, save it, then submit it to OpenShift Container Platform:

    $ oc create -f gluster-storage-class.yaml
    storageclass "gluster-heketi" created
  2. View the storage class:

    $ oc get storageclass
    NAME              TYPE
    gluster-heketi    kubernetes.io/glusterfs
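To double-check the parameters the provisioner will use, you can also describe the storage class. A sample of what to expect; the exact output format may vary by release:

    $ oc describe storageclass gluster-heketi
    Name:           gluster-heketi
    Provisioner:    kubernetes.io/glusterfs
    Parameters:     endpoint=heketi-storage-endpoints,resturl=http://10.42.0.0:8080,restuser=joe,restuserkey=My Secret Life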

Create a PVC to Request Storage for Your Application

  1. Create a persistent volume claim (PVC) requesting 5GB of storage.

    When the PVC is submitted, the dynamic provisioning framework and Heketi automatically provision a new GlusterFS volume and generate the persistent volume (PV) object:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster1
  annotations:
    volume.beta.kubernetes.io/storage-class: gluster-heketi  (1)
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi  (2)
1 The Kubernetes storage class annotation and the name of the storage class.
2 The amount of storage requested.
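Note: on clusters based on Kubernetes 1.6 or later, the beta annotation shown above is superseded by the spec.storageClassName field. A minimal equivalent, assuming such a version:

spec:
  storageClassName: gluster-heketi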
  2. Create the PVC YAML file, save it, then submit it to OpenShift Container Platform:

    $ oc create -f gluster-pvc.yaml
    persistentvolumeclaim "gluster1" created
  3. View the PVC:

    $ oc get pvc
    NAME       STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
    gluster1   Bound     pvc-7d37c7bd-bb5b-11e6-b81e-525400d87180   5Gi        RWO           14h

    Notice that the PVC is bound to a dynamically created volume.

  4. View the persistent volume (PV):

    $ oc get pv
    NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM              REASON    AGE
    pvc-7d37c7bd-bb5b-11e6-b81e-525400d87180   5Gi        RWO           Delete          Bound     default/gluster1             14h
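Behind the scenes, Heketi also created a matching GlusterFS volume for this PV. If heketi-cli is installed and HEKETI_CLI_SERVER is still exported, you can confirm it (the ids below are placeholders):

    $ heketi-cli volume list
    Id:<volume id>    Cluster:<cluster id>    Name:vol_<volume id>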

Create an NGINX Pod That Uses the PVC

At this point, you have a dynamically created GlusterFS volume, bound to a PVC. Now, you can use this claim in a pod. Create a simple NGINX pod:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    name: nginx-pod
spec:
  containers:
  - name: nginx-pod
    image: gcr.io/google_containers/nginx-slim:0.8
    ports:
    - name: web
      containerPort: 80
    securityContext:
      privileged: true
    volumeMounts:
    - name: gluster-vol1
      mountPath: /usr/share/nginx/html
  volumes:
  - name: gluster-vol1
    persistentVolumeClaim:
      claimName: gluster1 (1)
1 The name of the PVC created in the previous step.
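Note that the pod spec sets privileged: true. On OpenShift Container Platform, this requires that the pod's service account be allowed to use the privileged SCC. One way to grant that, assuming the pod runs under the default service account in the current project:

    $ oc adm policy add-scc-to-user privileged -z default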
  1. Create the Pod YAML file, save it, then submit it to OpenShift Container Platform:

    $ oc create -f nginx-pod.yaml
    pod "gluster-pod1" created
  2. View the pod:

    $ oc get pods -o wide
    NAME                               READY     STATUS    RESTARTS   AGE       IP               NODE
    nginx-pod                          1/1       Running   0          9m        10.38.0.0        node1
    glusterfs-node0-2509304327-vpce1   1/1       Running   0          1d        192.168.10.100   node0
    glusterfs-node1-3290690057-hhq92   1/1       Running   0          1d        192.168.10.101   node1
    glusterfs-node2-4072075787-okzjv   1/1       Running   0          1d        192.168.10.102   node2
    heketi-3017632314-yyngh            1/1       Running   0          1d        10.42.0.0        node0

    This may take a few minutes, as the pod may need to download the image if it does not already exist.

  3. oc exec into the container and create an index.html file in the directory specified by the pod's mountPath:

    $ oc exec -ti nginx-pod /bin/sh
    $ cd /usr/share/nginx/html
    $ echo 'Hello World from GlusterFS!!!' > index.html
    $ ls
    index.html
    $ exit
  4. From the master node, curl the URL of the pod:

    $ curl http://10.38.0.0
    Hello World from GlusterFS!!!
  5. Check your Gluster pod to ensure that the index.html file was written. Choose any of the Gluster pods:

    $ oc exec -ti glusterfs-node1-3290690057-hhq92 /bin/sh
    $ mount | grep heketi
    /dev/mapper/VolGroup00-LogVol00 on /var/lib/heketi type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
    /dev/mapper/vg_f92e09091f6b20ab12b02a2513e4ed90-brick_1e730a5462c352835055018e1874e578 on /var/lib/heketi/mounts/vg_f92e09091f6b20ab12b02a2513e4ed90/brick_1e730a5462c352835055018e1874e578 type xfs (rw,noatime,seclabel,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)
    /dev/mapper/vg_f92e09091f6b20ab12b02a2513e4ed90-brick_d8c06e606ff4cc29ccb9d018c73ee292 on /var/lib/heketi/mounts/vg_f92e09091f6b20ab12b02a2513e4ed90/brick_d8c06e606ff4cc29ccb9d018c73ee292 type xfs (rw,noatime,seclabel,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)
    
    $ cd /var/lib/heketi/mounts/vg_f92e09091f6b20ab12b02a2513e4ed90/brick_d8c06e606ff4cc29ccb9d018c73ee292/brick
    $ ls
    index.html
    $ cat index.html
    Hello World from GlusterFS!!!
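When you are finished, you can tear the example down. Because the PV was created with a reclaim policy of Delete, removing the claim also deletes the dynamically provisioned PV and its backing GlusterFS volume:

    $ oc delete pod nginx-pod
    pod "nginx-pod" deleted
    $ oc delete pvc gluster1
    persistentvolumeclaim "gluster1" deleted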