After installing OpenShift Container Platform, you can further expand and customize your cluster to your requirements, including storage configuration.
The StorageClass resource object describes and classifies storage that can be requested, as well as provides a means for passing parameters for dynamically provisioned storage on demand. StorageClass objects can also serve as a management mechanism for controlling different levels of storage and access to the storage. Cluster Administrators (cluster-admin) or Storage Administrators (storage-admin) define and create the StorageClass objects that users can request without needing any detailed knowledge about the underlying storage volume sources.
The OpenShift Container Platform persistent volume framework enables this functionality and allows administrators to provision a cluster with persistent storage. The framework also gives users a way to request those resources without having any knowledge of the underlying infrastructure.
Many storage types are available for use as persistent volumes in OpenShift Container Platform. While all of them can be statically provisioned by an administrator, some types of storage are created dynamically using the built-in provider and plug-in APIs.
OpenShift Container Platform provides the following provisioner plug-ins, which have generic implementations for dynamic provisioning that use the cluster’s configured provider’s API to create new storage resources:
Storage type | Provisioner plug-in name | Notes
---|---|---
Red Hat OpenStack Platform (RHOSP) Cinder | kubernetes.io/cinder |
RHOSP Manila Container Storage Interface (CSI) | manila.csi.openstack.org | Once installed, the OpenStack Manila CSI Driver Operator and ManilaDriver automatically create the required storage classes for all available Manila share types needed for dynamic provisioning.
AWS Elastic Block Store (EBS) | kubernetes.io/aws-ebs | For dynamic provisioning when using multiple clusters in different zones, tag each node with Key=kubernetes.io/cluster/<cluster_name>,Value=owned, where <cluster_name> is unique per cluster.
Azure Disk | kubernetes.io/azure-disk |
Azure File | kubernetes.io/azure-file | The persistent-volume-binder service account requires permissions to create and get secrets to store the Azure storage account and keys.
GCE Persistent Disk (gcePD) | kubernetes.io/gce-pd | In multi-zone configurations, it is advisable to run one OpenShift Container Platform cluster per GCE project to avoid PVs from being created in zones where no node in the current cluster exists.
VMware vSphere | kubernetes.io/vsphere-volume |

Any chosen provisioner plug-in also requires configuration for the relevant cloud, host, or third-party provider as per the relevant documentation.
StorageClass objects are currently globally scoped and must be created by cluster-admin or storage-admin users.
The Cluster Storage Operator might install a default storage class depending on the platform in use. This storage class is owned and controlled by the operator. It cannot be deleted or modified beyond defining annotations and labels. If different behavior is desired, you must define a custom storage class.
The following sections describe the basic definition for a StorageClass object and specific examples for each of the supported plug-in types.
The following resource shows the parameters and default values that you use to configure a storage class. This example uses the AWS Elastic Block Store (EBS) object definition.
Sample StorageClass definition
kind: StorageClass (1)
apiVersion: storage.k8s.io/v1 (2)
metadata:
name: gp2 (3)
annotations: (4)
storageclass.kubernetes.io/is-default-class: 'true'
...
provisioner: kubernetes.io/aws-ebs (5)
parameters: (6)
type: gp2
...
1 | (required) The API object type. |
2 | (required) The current apiVersion. |
3 | (required) The name of the storage class. |
4 | (optional) Annotations for the storage class. |
5 | (required) The type of provisioner associated with this storage class. |
6 | (optional) The parameters required for the specific provisioner; these vary from plug-in to plug-in. |
To set a storage class as the cluster-wide default, add the following annotation to your storage class metadata:
storageclass.kubernetes.io/is-default-class: "true"
For example:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
annotations:
storageclass.kubernetes.io/is-default-class: "true"
...
This enables any persistent volume claim (PVC) that does not specify a storage class to be provisioned automatically through the default storage class.
The beta annotation storageclass.beta.kubernetes.io/is-default-class still works; however, it will be removed in a future release.
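For illustration, a PVC that omits storageClassName is therefore provisioned through the default storage class. A minimal sketch, with a hypothetical claim name and size:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim # hypothetical name
spec:
  # No storageClassName is set, so the default storage class is used
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi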
To set a storage class description, add the following annotation to your storage class metadata:
kubernetes.io/description: My Storage Class Description
For example:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
annotations:
kubernetes.io/description: My Storage Class Description
...
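To confirm that an annotation was applied, you can inspect the storage class with oc; for example, assuming a class named gp2:
$ oc describe storageclass gp2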
Cinder object definition
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: gold
provisioner: kubernetes.io/cinder
parameters:
type: fast (1)
availability: nova (2)
fsType: ext4 (3)
1 | Volume type created in Cinder. Default is empty. |
2 | Availability Zone. If not specified, volumes are generally round-robined across all active zones where the OpenShift Container Platform cluster has a node. |
3 | File system that is created on dynamically provisioned volumes. This value is copied to the fsType field of dynamically provisioned persistent volumes and the file system is created when the volume is mounted for the first time. The default value is ext4. |
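As a sketch of how a user requests this class, a PVC references it by name through storageClassName; the claim name and size below are hypothetical:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gold-claim # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gold # the Cinder storage class defined above
  resources:
    requests:
      storage: 5Gi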
AWS Elastic Block Store (EBS) object definition
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: slow
provisioner: kubernetes.io/aws-ebs
parameters:
type: io1 (1)
iopsPerGB: "10" (2)
encrypted: "true" (3)
kmsKeyId: keyvalue (4)
fsType: ext4 (5)
1 | (required) Select from io1, gp2, sc1, or st1. The default is gp2. See the AWS documentation for details about these volume types. |
2 | (optional) Only for io1 volumes. I/O operations per second per GiB. The AWS volume plug-in multiplies this with the size of the requested volume to compute IOPS of the volume. The value cap is 20,000 IOPS, which is the maximum supported by AWS. See the AWS documentation for further details. |
3 | (optional) Denotes whether to encrypt the EBS volume. Valid values are true or false. |
4 | (optional) The full ARN of the key to use when encrypting the volume. If none is supplied, but encrypted is set to true, then AWS generates a key. See the AWS documentation for a valid Amazon Resource Name (ARN) value. |
5 | (optional) File system that is created on dynamically provisioned volumes. This value is copied to the fsType field of dynamically provisioned persistent volumes and the file system is created when the volume is mounted for the first time. The default value is ext4. |
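To create the storage class from a definition like the one above, save it to a file and create it with oc; the file name here is hypothetical:
$ oc create -f aws-ebs-storageclass.yaml
$ oc get storageclass slow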
Azure Disk object definition
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: managed-premium
annotations:
storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/azure-disk
volumeBindingMode: WaitForFirstConsumer (1)
allowVolumeExpansion: true
parameters:
kind: Managed (2)
storageaccounttype: Premium_LRS (3)
reclaimPolicy: Delete
1 | Using WaitForFirstConsumer is strongly recommended. This provisions the volume while allowing enough storage to schedule the pod on a free worker node from an available zone. |
2 | Possible values are Shared (default), Managed, and Dedicated. |
3 | Azure storage account SKU tier. Default is empty. Note that Premium VMs can attach both Standard_LRS and Premium_LRS disks, Standard VMs can only attach Standard_LRS disks, Managed VMs can only attach managed disks, and unmanaged VMs can only attach unmanaged disks. |
The Azure File storage class uses secrets to store the Azure storage account name and the storage account key that are required to create an Azure Files share. These permissions are created as part of the following procedure.
Define a ClusterRole object that allows access to create and view secrets:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
# name: system:azure-cloud-provider
name: <persistent-volume-binder-role> (1)
rules:
- apiGroups: ['']
resources: ['secrets']
verbs: ['get','create']
1 | The name of the cluster role to view and create secrets. |
Add the cluster role to the service account:
$ oc adm policy add-cluster-role-to-user <persistent-volume-binder-role> \
    system:serviceaccount:kube-system:persistent-volume-binder
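If you want to confirm the binding, you can list cluster role bindings and filter for the role name; a minimal check, assuming the role name used above:
$ oc get clusterrolebinding | grep <persistent-volume-binder-role>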
Create the Azure File StorageClass object:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: <azure-file> (1)
provisioner: kubernetes.io/azure-file
parameters:
location: eastus (2)
skuName: Standard_LRS (3)
storageAccount: <storage-account> (4)
reclaimPolicy: Delete
volumeBindingMode: Immediate
1 | Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes. |
2 | Location of the Azure storage account, such as eastus . Default is empty, meaning that a new Azure storage account will be created in the OpenShift Container Platform cluster’s location. |
3 | SKU tier of the Azure storage account, such as Standard_LRS . Default is empty, meaning that a new Azure storage account will be created with the Standard_LRS SKU. |
4 | Name of the Azure storage account. If a storage account is provided, then skuName and location are ignored. If no storage account is provided, then the storage class searches the associated resource group for a storage account that matches the defined skuName and location. |
The following file system features are not supported by the default Azure File storage class:
Symlinks
Hard links
Extended attributes
Sparse files
Named pipes
Additionally, the owner user identifier (UID) of the Azure File mounted directory is different from the process UID of the container. The uid mount option can be specified in the StorageClass object to define a specific user identifier to use for the mounted directory.
The following StorageClass object demonstrates modifying the user and group identifier, along with enabling symlinks for the mounted directory.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: azure-file
mountOptions:
- uid=1500 (1)
- gid=1500 (2)
- mfsymlinks (3)
provisioner: kubernetes.io/azure-file
parameters:
location: eastus
skuName: Standard_LRS
reclaimPolicy: Delete
volumeBindingMode: Immediate
1 | Specifies the user identifier to use for the mounted directory. |
2 | Specifies the group identifier to use for the mounted directory. |
3 | Enables symlinks. |
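A hypothetical pod that mounts a claim provisioned from this class sees the mounted directory owned by UID and GID 1500; the pod, image, and claim names below are placeholders:
apiVersion: v1
kind: Pod
metadata:
  name: azure-file-pod # hypothetical name
spec:
  containers:
    - name: app
      image: registry.access.redhat.com/ubi8/ubi # example image
      command: ['sleep', 'infinity']
      volumeMounts:
        - name: azure-file-share
          mountPath: /mnt/azure # files here appear with uid=1500, gid=1500
  volumes:
    - name: azure-file-share
      persistentVolumeClaim:
        claimName: azure-file-claim # hypothetical PVC that uses the azure-file class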
GCE Persistent Disk (gcePD) object definition
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: standard
annotations:
storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/gce-pd
parameters:
type: pd-standard (1)
replication-type: none
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
reclaimPolicy: Delete
1 | Select either pd-standard or pd-ssd . The default is pd-standard . |
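Because allowVolumeExpansion is set to true, an existing claim provisioned from this class can be grown by patching its requested size; the claim name and new size here are hypothetical:
$ oc patch pvc example-claim -p '{"spec": {"resources": {"requests": {"storage": "200Gi"}}}}'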
VMware vSphere object definition
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: slow
provisioner: kubernetes.io/vsphere-volume (1)
parameters:
diskformat: thin (2)
1 | For more information about using VMware vSphere with OpenShift Container Platform, see the VMware vSphere documentation. |
2 | diskformat: thin, zeroedthick, and eagerzeroedthick are all valid disk formats. See the vSphere documentation for additional details regarding the disk format types. The default value is thin. |
If you are using AWS, use the following process to change the default storage class. This process assumes you have two storage classes defined, gp2 and standard, and you want to change the default storage class from gp2 to standard.
List the storage classes:
$ oc get storageclass
NAME TYPE
gp2 (default) kubernetes.io/aws-ebs (1)
standard kubernetes.io/aws-ebs
1 | (default) denotes the default storage class. |
Change the value of the annotation storageclass.kubernetes.io/is-default-class to false for the default storage class:
$ oc patch storageclass gp2 -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
Make another storage class the default by adding or modifying the annotation storageclass.kubernetes.io/is-default-class=true:
$ oc patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
Verify the changes:
$ oc get storageclass
NAME TYPE
gp2 kubernetes.io/aws-ebs
standard (default) kubernetes.io/aws-ebs
Optimizing storage helps to minimize storage use across all resources. By optimizing storage, administrators help ensure that existing storage resources are working in an efficient manner.
Understand your persistent storage options so that you can optimize your OpenShift Container Platform environment.
Storage type | Description | Examples
---|---|---
Block | Presented to the operating system (OS) as a block device. Suitable for applications that need full control of storage and operate at a low level on files, bypassing the file system. Also referred to as a Storage Area Network (SAN). Non-shareable, which means that only one client at a time can mount an endpoint of this type. | AWS EBS and VMware vSphere support dynamic persistent volume (PV) provisioning natively in OpenShift Container Platform.
File | Presented to the OS as a file system export to be mounted. Also referred to as Network Attached Storage (NAS). Concurrency, latency, file locking mechanisms, and other capabilities vary widely between protocols, implementations, vendors, and scales. | RHEL NFS, NetApp NFS [1], and Vendor NFS
Object | Accessible through a REST API endpoint. Configurable for use in the OpenShift Container Platform registry. Applications must build their drivers into the application and/or container. | AWS S3

[1] NetApp NFS supports dynamic PV provisioning when using the Trident plug-in.

Currently, CNS is not supported in OpenShift Container Platform 4.5.
The following table summarizes the recommended and configurable storage technologies for the given OpenShift Container Platform cluster application.
Storage type | ROX1 | RWX2 | Registry | Scaled registry | Metrics3 | Logging | Apps
---|---|---|---|---|---|---|---
Block | Yes4 | No | Configurable | Not configurable | Recommended | Recommended | Recommended
File | Yes4 | Yes | Configurable | Configurable | Configurable5 | Configurable6 | Recommended
Object | Yes | Yes | Recommended | Recommended | Not configurable | Not configurable | Not configurable7

1 ReadOnlyMany
2 ReadWriteMany
3 Prometheus is the underlying technology used for metrics.
4 This does not apply to physical disk, VM physical disk, VMDK, loopback over NFS, AWS EBS, and Azure Disk.
5 For metrics, using file storage with the ReadWriteMany (RWX) access mode is unreliable. If you use file storage, do not configure the RWX access mode on any PersistentVolumeClaims that are configured for use with metrics.
6 For logging, using any shared storage would be an anti-pattern. One volume per elasticsearch is required.
7 Object storage is not consumed through OpenShift Container Platform’s PVs or PVCs. Apps must integrate with the object storage REST API.
A scaled registry is an OpenShift Container Platform registry where two or more pod replicas are running.
Testing shows issues with using the NFS server on Red Hat Enterprise Linux (RHEL) as a storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations in the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components.
In a non-scaled/high-availability (HA) OpenShift Container Platform registry cluster deployment:
The storage technology does not have to support RWX access mode.
The storage technology must ensure read-after-write consistency.
The preferred storage technology is object storage followed by block storage.
File storage is not recommended for an OpenShift Container Platform registry cluster deployment with production workloads.
In a scaled/HA OpenShift Container Platform registry cluster deployment:
The storage technology must support RWX access mode and must ensure read-after-write consistency.
The preferred storage technology is object storage.
Amazon Simple Storage Service (Amazon S3), Google Cloud Storage (GCS), Microsoft Azure Blob Storage, and OpenStack Swift are supported.
Object storage should be S3 or Swift compliant.
File storage is not recommended for a scaled/HA OpenShift Container Platform registry cluster deployment with production workloads.
For non-cloud platforms, such as vSphere and bare metal installations, the only configurable technology is file storage.
Block storage is not configurable.
In an OpenShift Container Platform hosted metrics cluster deployment:
The preferred storage technology is block storage.
Object storage is not configurable.
It is not recommended to use file storage for a hosted metrics cluster deployment with production workloads.
In an OpenShift Container Platform hosted logging cluster deployment:
The preferred storage technology is block storage.
File storage is not recommended for a hosted logging cluster deployment with production workloads.
Object storage is not configurable.
Testing shows issues with using the NFS server on RHEL as a storage backend for core services. This includes Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations in the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components.
Application use cases vary from application to application, as described in the following examples:
Storage technologies that support dynamic PV provisioning have low mount time latencies and are not tied to nodes to support a healthy cluster.
Application developers are responsible for knowing and understanding the storage requirements for their application, and how it works with the provided storage to ensure that issues do not occur when an application scales or interacts with the storage layer.
OpenShift Container Platform Internal etcd: For the best etcd reliability, the lowest consistent latency storage technology is preferable. It is highly recommended that you use etcd with storage that handles serial writes (fsync) quickly, such as NVMe or SSD. Ceph, NFS, and spinning disks are not recommended. One way to sanity-check fsync latency on a candidate disk is sketched after this list.
Red Hat OpenStack Platform (RHOSP) Cinder: RHOSP Cinder tends to be adept in ROX access mode use cases.
Databases: Databases (RDBMSs, NoSQL DBs, etc.) tend to perform best with dedicated block storage.
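One common way to sanity-check fsync latency on a candidate etcd disk, as mentioned above, is the fio benchmark; this invocation is a sketch (fio must be installed, and the target directory is hypothetical):
$ fio --name=etcd-fsync-test --rw=write --ioengine=sync --fdatasync=1 \
    --directory=/var/lib/etcd-test --size=22m --bs=2300
Review the reported fdatasync percentiles; the commonly cited guidance for etcd is that the 99th percentile of fdatasync should stay under 10 ms.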
Red Hat OpenShift Container Storage is a provider-agnostic persistent storage solution for OpenShift Container Platform, supporting file, block, and object storage, either in-house or in hybrid clouds. As a Red Hat storage solution, Red Hat OpenShift Container Storage is completely integrated with OpenShift Container Platform for deployment, management, and monitoring.
If you are looking for Red Hat OpenShift Container Storage information about… | See the following Red Hat OpenShift Container Storage documentation:
---|---
What’s new, known issues, notable bug fixes, and Technology Previews |
Supported workloads, layouts, hardware and software requirements, sizing and scaling recommendations |
Instructions on preparing to deploy when your environment is not directly connected to the internet | Preparing to deploy OpenShift Container Storage 4.5 in a disconnected environment
Instructions on deploying OpenShift Container Storage to use an external Red Hat Ceph Storage cluster |
Instructions on deploying OpenShift Container Storage to local storage on bare metal infrastructure | Deploying OpenShift Container Storage 4.5 using bare metal infrastructure
Instructions on deploying OpenShift Container Storage on Red Hat OpenShift Container Platform VMware vSphere clusters |
Instructions on deploying OpenShift Container Storage using Amazon Web Services for local or cloud storage | Deploying OpenShift Container Storage 4.5 using Amazon Web Services
Instructions on deploying and managing OpenShift Container Storage on existing Red Hat OpenShift Container Platform Google Cloud clusters | Deploying and managing OpenShift Container Storage 4.5 using Google Cloud
Instructions on deploying and managing OpenShift Container Storage on existing Red Hat OpenShift Container Platform Azure clusters | Deploying and managing OpenShift Container Storage 4.5 using Microsoft Azure
Managing a Red Hat OpenShift Container Storage 4.5 cluster |
Monitoring a Red Hat OpenShift Container Storage 4.5 cluster |
Resolve issues encountered during operations |
Migrating your OpenShift Container Platform cluster from version 3 to version 4 |