The Container Storage Interface (CSI) allows OpenShift Container Platform to consume storage from storage backends that implement the CSI interface as persistent storage.
Container Storage Interface is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
OpenShift Container Platform does not ship with any CSI drivers. It is recommended to use the CSI drivers provided by community or storage vendors. OpenShift Container Platform 4.1 supports version 1.0.0 of the CSI specification.
CSI drivers are typically shipped as container images. These containers are not aware of the OpenShift Container Platform environment in which they run. To use a CSI-compatible storage backend in OpenShift Container Platform, the cluster administrator must deploy several components that serve as a bridge between OpenShift Container Platform and the storage driver.
The following diagram provides a high-level overview of the components running in pods in the OpenShift Container Platform cluster.
It is possible to run multiple CSI drivers for different storage backends. Each driver needs its own external controllers Deployment and a DaemonSet that runs the driver and CSI registrar.
External CSI controllers run as a Deployment that deploys one or more pods, each with three containers:
An external CSI attacher container that translates attach and detach calls from OpenShift Container Platform to the respective ControllerPublish and ControllerUnpublish calls to the CSI driver.
An external CSI provisioner container that translates provision and delete calls from OpenShift Container Platform to the respective CreateVolume and DeleteVolume calls to the CSI driver.
A CSI driver container.
The CSI attacher and CSI provisioner containers communicate with the CSI driver container using UNIX Domain Sockets, ensuring that no CSI communication leaves the pod. The CSI driver is not accessible from outside of the pod.
The external attacher must also run for CSI drivers that do not support third-party attach and detach operations.
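For illustration only, a minimal sketch of such a Deployment follows. The example-csi-controller name, the driver image, and the socket path are hypothetical placeholders; the sidecar images shown are the community-maintained ones. Use the manifests and versions documented by your CSI driver vendor.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-csi-controller                       # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-csi-controller
  template:
    metadata:
      labels:
        app: example-csi-controller
    spec:
      containers:
      - name: csi-attacher
        image: quay.io/k8scsi/csi-attacher:v1.0.1    # community sidecar; pin the version your vendor documents
        args:
        - --csi-address=/csi/csi.sock
        volumeMounts:
        - name: socket-dir
          mountPath: /csi
      - name: csi-provisioner
        image: quay.io/k8scsi/csi-provisioner:v1.0.1 # community sidecar
        args:
        - --csi-address=/csi/csi.sock
        volumeMounts:
        - name: socket-dir
          mountPath: /csi
      - name: csi-driver
        image: example.com/csi-driver:v1.0.0         # vendor-provided driver image (hypothetical)
        volumeMounts:
        - name: socket-dir
          mountPath: /csi
      volumes:
      # emptyDir keeps the UNIX Domain Socket private to the pod
      - name: socket-dir
        emptyDir: {}

Note how all three containers mount the same emptyDir volume: the sidecars reach the driver only through the socket in that shared directory, which is why no CSI communication leaves the pod.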
The CSI driver DaemonSet runs a pod on every node that allows OpenShift Container Platform to mount storage provided by the CSI driver to the node and use it in user workloads (pods) as persistent volumes (PVs). The pod with the CSI driver installed contains the following containers:
A CSI driver registrar, which registers the CSI driver into the openshift-node service running on the node. The openshift-node process running on the node then directly connects with the CSI driver using the UNIX Domain Socket available on the node.
A CSI driver.
The CSI driver deployed on the node should have as few credentials to the storage backend as possible. OpenShift Container Platform only uses the node plug-in set of CSI calls, such as NodePublish/NodeUnpublish and NodeStage/NodeUnstage, if these calls are implemented.
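A minimal sketch of such a DaemonSet follows, again for illustration only. The example-csi-node name, the driver image, and the plugin directory are hypothetical placeholders; the host paths shown are the upstream Kubernetes defaults and may differ on your cluster.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-csi-node                                         # hypothetical name
spec:
  selector:
    matchLabels:
      app: example-csi-node
  template:
    metadata:
      labels:
        app: example-csi-node
    spec:
      containers:
      - name: csi-driver-registrar
        image: quay.io/k8scsi/csi-node-driver-registrar:v1.0.2   # community sidecar
        args:
        - --csi-address=/csi/csi.sock
        - --kubelet-registration-path=/var/lib/kubelet/plugins/csi-example/csi.sock
        volumeMounts:
        - name: plugin-dir
          mountPath: /csi
        - name: registration-dir
          mountPath: /registration
      - name: csi-driver
        image: example.com/csi-driver:v1.0.0                     # vendor-provided driver image (hypothetical)
        securityContext:
          privileged: true                                       # typically required to mount volumes on the host
        volumeMounts:
        - name: plugin-dir
          mountPath: /csi
        - name: pods-mount-dir
          mountPath: /var/lib/kubelet/pods
          mountPropagation: Bidirectional                        # propagates mounts back to the host for pods
      volumes:
      - name: plugin-dir
        hostPath:
          path: /var/lib/kubelet/plugins/csi-example             # hypothetical per-driver socket directory
          type: DirectoryOrCreate
      - name: registration-dir
        hostPath:
          path: /var/lib/kubelet/plugins_registry                # upstream default registration directory
          type: Directory
      - name: pods-mount-dir
        hostPath:
          path: /var/lib/kubelet/pods
          type: Directory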
Dynamic provisioning of persistent storage depends on the capabilities of the CSI driver and underlying storage backend. The provider of the CSI driver should document how to create a StorageClass in OpenShift Container Platform and the parameters available for configuration.
As seen in the following OpenStack Cinder example, you can deploy a StorageClass to enable dynamic provisioning.
Create a default storage class that ensures all PVCs that do not require any special storage class are provisioned by the installed CSI driver.
# oc create -f - << EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cinder
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi-cinderplugin
parameters:
EOF
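Because the class above is annotated as the default, a PVC that does not specify a storageClassName is provisioned by the CSI driver. A minimal sketch, with a hypothetical claim name:

# oc create -f - << EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim        # hypothetical name
spec:
  # no storageClassName is set, so the default class ("cinder") is used
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF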
The following example installs a default MySQL template without any changes to the template.
The CSI driver has been deployed.
A StorageClass has been created for dynamic provisioning.
Create the MySQL template:
# oc new-app mysql-persistent
--> Deploying template "openshift/mysql-persistent" to project default
...

# oc get pvc
NAME      STATUS    VOLUME                                   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mysql     Bound     kubernetes-dynamic-pv-3271ffcb4e1811e8   1Gi        RWO            cinder         3s