# Complete Example Using Ceph RBD
This topic provides an end-to-end example of using an existing Ceph cluster as an OpenShift Container Platform persistent store. It is assumed that a working Ceph cluster is already set up. If not, consult the Overview of Red Hat Ceph Storage.
Persistent Storage Using Ceph Rados Block Device provides an explanation of persistent volumes (PVs), persistent volume claims (PVCs), and using Ceph RBD as persistent storage.
The ceph-common library must be installed on all schedulable OpenShift Container Platform nodes:

# yum install -y ceph-common

Note: The OpenShift Container Platform all-in-one host is not often used to run pod workloads and, thus, is not included as a schedulable node.
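To confirm the package is present on a given node, it can be queried (the reported version will vary by cluster):

# rpm -q ceph-common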
The ceph auth get-key command is run on a Ceph MON node to display the key value for the client.admin user.
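For example, run the following on a MON node (the key shown is illustrative; every cluster generates its own):

# ceph auth get-key client.admin | base64
QVFBOFF2SlZheUJQRVJBQWgvS2cwT1laQUhPQno3akZwekxxdGc9PQ==

The encoded value is then used as the key in a secret definition: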
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
data:
  key: QVFBOFF2SlZheUJQRVJBQWgvS2cwT1laQUhPQno3akZwekxxdGc9PQ== (1)
1. This base64 key is generated on one of the Ceph MON nodes using the ceph auth get-key client.admin | base64 command shown above; copy the output and paste it in as the secret key's value.
Save the secret definition to a file, for example ceph-secret.yaml, then create the secret:
$ oc create -f ceph-secret.yaml
secret "ceph-secret" created
Verify that the secret was created:
$ oc get secret ceph-secret
NAME          TYPE      DATA      AGE
ceph-secret   Opaque    1         23d
Next, define the PV in an object definition file before creating the object in OpenShift Container Platform:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-pv (1)
spec:
  capacity:
    storage: 2Gi (2)
  accessModes:
    - ReadWriteOnce (3)
  rbd: (4)
    monitors: (5)
      - 192.168.122.133:6789
    pool: rbd
    image: ceph-image
    user: admin
    secretRef:
      name: ceph-secret (6)
    fsType: ext4 (7)
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle
1. The name of the PV, which is referenced in pod definitions or displayed in various oc volume commands.
2. The amount of storage allocated to this volume.
3. accessModes are used as labels to match a PV and a PVC. They currently do not define any form of access control. All block storage is defined to be single user (non-shared storage).
4. This defines the volume type being used. In this case, the rbd plug-in is defined.
5. This is an array of Ceph monitor IP addresses and ports.
6. This is the Ceph secret, defined above. It is used to create a secure connection from OpenShift Container Platform to the Ceph server.
7. This is the file system type mounted on the Ceph RBD block device.
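The RBD image named in the PV must already exist in the pool. If it has not been created yet, a sketch of doing so from a Ceph client or MON node follows (the 2048 MB size matches the 2Gi capacity defined above):

# rbd create ceph-image --size 2048 --pool rbd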
Save the PV definition to a file, for example ceph-pv.yaml, and create the persistent volume:
$ oc create -f ceph-pv.yaml
persistentvolume "ceph-pv" created
Verify that the persistent volume was created:
$ oc get pv
NAME      LABELS    CAPACITY     ACCESSMODES   STATUS      CLAIM     REASON    AGE
ceph-pv   <none>    2147483648   RWO           Available             2s
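For a more detailed view of the PV, including the RBD monitors, pool, and image it references, the volume can be described (output omitted here):

$ oc describe pv ceph-pv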
A persistent volume claim (PVC) specifies the desired access mode and storage capacity. Currently, based on only these two attributes, a PVC is bound to a single PV. Once a PV is bound to a PVC, that PV is essentially tied to the PVC's project and cannot be bound to another PVC. There is a one-to-one mapping of PVs and PVCs. However, multiple pods in the same project can use the same PVC.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-claim
spec:
  accessModes: (1)
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi (2)
1. As mentioned above for PVs, the accessModes do not enforce access rights, but rather act as labels to match a PV to a PVC.
2. This claim looks for PVs offering 2Gi or greater capacity.
Save the PVC definition to a file, for example ceph-claim.yaml, and create the PVC:
$ oc create -f ceph-claim.yaml
persistentvolumeclaim "ceph-claim" created

Verify that the PVC was created and bound to the expected PV:

$ oc get pvc
NAME         LABELS    STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
ceph-claim   <none>    Bound     ceph-pv   2Gi        RWO           21s (1)
1. The claim was bound to the ceph-pv PV.
A pod definition file or a template file can be used to define a pod. Below is a pod specification that creates a single container and mounts the Ceph RBD volume for read-write access:
apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod1 (1)
spec:
  containers:
    - name: ceph-busybox
      image: busybox (2)
      command: ["sleep", "60000"]
      volumeMounts:
        - name: ceph-vol1 (3)
          mountPath: /usr/share/busybox (4)
          readOnly: false
  volumes:
    - name: ceph-vol1 (3)
      persistentVolumeClaim:
        claimName: ceph-claim (5)
1. The name of this pod as displayed by oc get pod.
2. The image run by this pod. In this case, we are telling busybox to sleep.
3. The name of the volume. This name must be the same in both the containers and volumes sections.
4. The mount path as seen in the container.
5. The PVC that is bound to the Ceph RBD cluster.
Save the pod definition to a file, for example ceph-pod1.yaml, and create the pod:
$ oc create -f ceph-pod1.yaml
pod "ceph-pod1" created

Verify that the pod was created:

$ oc get pod
NAME        READY     STATUS    RESTARTS   AGE
ceph-pod1   1/1       Running   0          2m (1)
1. After a minute or so, the pod will be in the Running state.
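Once the pod is running, the RBD-backed mount can be given a quick smoke test from inside the container (the file name used here is arbitrary):

$ oc exec ceph-pod1 -- touch /usr/share/busybox/testfile
$ oc exec ceph-pod1 -- ls /usr/share/busybox/testfile
/usr/share/busybox/testfile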
When using block storage, such as Ceph RBD, the physical block storage is managed by the pod. The group ID defined in the pod becomes the group ID of both the Ceph RBD mount inside the container and the group ID of the actual storage itself. Thus, it is usually unnecessary to define a group ID in the pod specification. However, if a group ID is desired, it can be defined using fsGroup, as shown in the following pod definition fragment:
...
spec:
  containers:
    - name:
      ...
  securityContext: (1)
    fsGroup: 7777 (2)
...
1. securityContext must be defined at the pod level, not under a specific container.
2. All containers in the pod will have the same fsGroup ID.
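Putting it together, a complete variant of the earlier pod with fsGroup set might look like the following sketch, which reuses the ceph-claim PVC from above (the pod name ceph-pod2 and group ID 7777 are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod2
spec:
  securityContext:
    fsGroup: 7777 # applied to the RBD mount and inherited by all containers
  containers:
    - name: ceph-busybox
      image: busybox
      command: ["sleep", "60000"]
      volumeMounts:
        - name: ceph-vol1
          mountPath: /usr/share/busybox
          readOnly: false
  volumes:
    - name: ceph-vol1
      persistentVolumeClaim:
        claimName: ceph-claim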