After you deploy a hosted cluster on OpenShift Virtualization, you can manage the cluster by completing the following procedures.
You can access the hosted cluster by either getting the kubeconfig file and kubeadmin credentials directly from resources, or by using the hcp command-line interface (CLI) to generate a kubeconfig file.
To access the hosted cluster by getting the kubeconfig file and credentials directly from resources, you must be familiar with the access secrets for hosted clusters. The hosted cluster (hosting) namespace contains hosted cluster resources and the access secrets. The hosted control plane namespace is where the hosted control plane runs.
The secret name formats are as follows:
- kubeconfig secret: <hosted_cluster_namespace>-<name>-admin-kubeconfig (clusters-hypershift-demo-admin-kubeconfig)
- kubeadmin password secret: <hosted_cluster_namespace>-<name>-kubeadmin-password (clusters-hypershift-demo-kubeadmin-password)
The kubeconfig secret contains a Base64-encoded kubeconfig field, which you can decode and save into a file to use with the following command:
$ oc --kubeconfig <hosted_cluster_name>.kubeconfig get nodes
The kubeadmin password secret is also Base64-encoded. You can decode it and use the password to log in to the API server or console of the hosted cluster.
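For example, the following is a minimal sketch of extracting and decoding both secrets by using oc and base64. The sketch assumes that the access secrets live in the hosted cluster (hosting) namespace, as described above, and that the kubeadmin password secret stores its value under the password data key; verify both assumptions in your environment:

$ oc get secret <hosted_cluster_namespace>-<hosted_cluster_name>-admin-kubeconfig \
    -n <hosted_cluster_namespace> \
    -o jsonpath='{.data.kubeconfig}' | base64 -d > <hosted_cluster_name>.kubeconfig
$ oc get secret <hosted_cluster_namespace>-<hosted_cluster_name>-kubeadmin-password \
    -n <hosted_cluster_namespace> \
    -o jsonpath='{.data.password}' | base64 -d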
To access the hosted cluster by using the hcp CLI to generate the kubeconfig file, take the following steps:
Generate the kubeconfig file by entering the following command:
$ hcp create kubeconfig --namespace <hosted_cluster_namespace> --name <hosted_cluster_name> > <hosted_cluster_name>.kubeconfig
After you save the kubeconfig file, you can access the hosted cluster by entering the following example command:
$ oc --kubeconfig <hosted_cluster_name>.kubeconfig get nodes
If you do not provide any advanced storage configuration, the default storage class is used for the KubeVirt virtual machine (VM) images, the KubeVirt Container Storage Interface (CSI) mapping, and the etcd volumes.
The following table lists the capabilities that the infrastructure must provide to support persistent storage in a hosted cluster:
Infrastructure CSI provider | Hosted cluster CSI provider | Hosted cluster capabilities | Notes |
---|---|---|---|
Any RWX CSI provider | KubeVirt CSI | Basic: RWO | Recommended |
Any RWX CSI provider | Red Hat OpenShift Data Foundation external mode | Red Hat OpenShift Data Foundation feature set | |
Any RWX CSI provider | Red Hat OpenShift Data Foundation internal mode | Red Hat OpenShift Data Foundation feature set | Do not use |
KubeVirt CSI supports mapping an infrastructure storage class that is capable of ReadWriteMany (RWX) access. You can map the infrastructure storage class to a hosted storage class during cluster creation.
To map the infrastructure storage class to the hosted storage class, use the --infra-storage-class-mapping argument by running the following command:
$ hcp create cluster kubevirt \
--name <hosted_cluster_name> \ (1)
--node-pool-replicas <worker_node_count> \ (2)
--pull-secret <path_to_pull_secret> \ (3)
--memory <memory> \ (4)
--cores <cpu> \ (5)
  --infra-storage-class-mapping=<infrastructure_storage_class>/<hosted_storage_class> (6)
1 | Specify the name of your hosted cluster, for instance, example . |
2 | Specify the worker count, for example, 2 . |
3 | Specify the path to your pull secret, for example, /user/name/pullsecret . |
4 | Specify a value for memory, for example, 8Gi . |
5 | Specify a value for CPU, for example, 2 . |
6 | Replace <infrastructure_storage_class> with the infrastructure storage class name and <hosted_storage_class> with the hosted cluster storage class name. You can use the --infra-storage-class-mapping argument multiple times within the hcp create cluster command. |
After you create the hosted cluster, the infrastructure storage class is visible within the hosted cluster. When you create a Persistent Volume Claim (PVC) within the hosted cluster that uses one of those storage classes, KubeVirt CSI provisions that volume by using the infrastructure storage class mapping that you configured during cluster creation.
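For example, the following is a minimal PersistentVolumeClaim sketch that a workload in the hosted cluster could create against a mapped class; my-claim is a placeholder name, and <hosted_storage_class> is the hosted storage class that you mapped at cluster creation:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: <hosted_storage_class> # hosted storage class mapped at cluster creation

Create the PVC by running oc --kubeconfig <hosted_cluster_name>.kubeconfig apply -f against the hosted cluster; KubeVirt CSI then provisions the backing volume by using the mapped infrastructure storage class.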
KubeVirt CSI supports mapping only an infrastructure storage class that is capable of RWX access. |
The following table shows how volume and access mode capabilities map to KubeVirt CSI storage classes:
Infrastructure CSI capability | Hosted cluster CSI capability | VM live migration support | Notes |
---|---|---|---|
RWX: Block or Filesystem | RWO Block or Filesystem; RWX Block | Supported | Use Block mode, because Filesystem volume mode results in degraded performance for the hosted cluster. |
RWO Block | RWO Block or Filesystem | Not supported | Lack of live migration support affects the ability to update the underlying infrastructure cluster that hosts the KubeVirt VMs. |
RWO Filesystem | RWO Block or Filesystem | Not supported | Lack of live migration support affects the ability to update the underlying infrastructure cluster that hosts the KubeVirt VMs. Use of the infrastructure Filesystem volume mode results in degraded performance for the hosted cluster. |
You can expose your infrastructure volume snapshot class to the hosted cluster by using KubeVirt CSI.
To map your volume snapshot class to the hosted cluster, use the --infra-volumesnapshot-class-mapping argument when creating a hosted cluster. Run the following command:
$ hcp create cluster kubevirt \
--name <hosted_cluster_name> \ (1)
--node-pool-replicas <worker_node_count> \ (2)
--pull-secret <path_to_pull_secret> \ (3)
--memory <memory> \ (4)
--cores <cpu> \ (5)
--infra-storage-class-mapping=<infrastructure_storage_class>/<hosted_storage_class> \ (6)
--infra-volumesnapshot-class-mapping=<infrastructure_volume_snapshot_class>/<hosted_volume_snapshot_class> (7)
1 | Specify the name of your hosted cluster, for instance, example . |
2 | Specify the worker count, for example, 2 . |
3 | Specify the path to your pull secret, for example, /user/name/pullsecret . |
4 | Specify a value for memory, for example, 8Gi . |
5 | Specify a value for CPU, for example, 2 . |
6 | Replace <infrastructure_storage_class> with the storage class present in the infrastructure cluster. Replace <hosted_storage_class> with the storage class present in the hosted cluster. |
7 | Replace <infrastructure_volume_snapshot_class> with the volume snapshot class present in the infrastructure cluster. Replace <hosted_volume_snapshot_class> with the volume snapshot class present in the hosted cluster. |
If you do not use the --infra-storage-class-mapping and --infra-volumesnapshot-class-mapping arguments, the hosted cluster is created with the default storage class and volume snapshot class. |
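For example, the following is a minimal VolumeSnapshot sketch that a workload in the hosted cluster could create by using the mapped class; my-snapshot and my-claim are placeholder names:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-snapshot
spec:
  volumeSnapshotClassName: <hosted_volume_snapshot_class> # hosted volume snapshot class mapped at cluster creation
  source:
    persistentVolumeClaimName: my-claim # a PVC that uses the mapped hosted storage class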
You can map multiple volume snapshot classes to the hosted cluster by assigning them to a specific group. The infrastructure storage class and the volume snapshot class are compatible with each other only if they belong to the same group.
To map multiple volume snapshot classes to the hosted cluster, use the group option when creating a hosted cluster. Run the following command:
$ hcp create cluster kubevirt \
--name <hosted_cluster_name> \ (1)
--node-pool-replicas <worker_node_count> \ (2)
--pull-secret <path_to_pull_secret> \ (3)
--memory <memory> \ (4)
--cores <cpu> \ (5)
--infra-storage-class-mapping=<infrastructure_storage_class>/<hosted_storage_class>,group=<group_name> \ (6)
--infra-storage-class-mapping=<infrastructure_storage_class>/<hosted_storage_class>,group=<group_name> \
--infra-storage-class-mapping=<infrastructure_storage_class>/<hosted_storage_class>,group=<group_name> \
--infra-volumesnapshot-class-mapping=<infrastructure_volume_snapshot_class>/<hosted_volume_snapshot_class>,group=<group_name> \ (7)
--infra-volumesnapshot-class-mapping=<infrastructure_volume_snapshot_class>/<hosted_volume_snapshot_class>,group=<group_name>
1 | Specify the name of your hosted cluster, for instance, example . |
2 | Specify the worker count, for example, 2 . |
3 | Specify the path to your pull secret, for example, /user/name/pullsecret . |
4 | Specify a value for memory, for example, 8Gi . |
5 | Specify a value for CPU, for example, 2 . |
6 | Replace <infrastructure_storage_class> with the storage class present in the infrastructure cluster. Replace <hosted_storage_class> with the storage class present in the hosted cluster. Replace <group_name> with the group name. For example, infra-storage-class-mygroup/hosted-storage-class-mygroup,group=mygroup and infra-storage-class-mymap/hosted-storage-class-mymap,group=mymap . |
7 | Replace <infrastructure_volume_snapshot_class> with the volume snapshot class present in the infrastructure cluster. Replace <hosted_volume_snapshot_class> with the volume snapshot class present in the hosted cluster. For example, infra-vol-snap-mygroup/hosted-vol-snap-mygroup,group=mygroup and infra-vol-snap-mymap/hosted-vol-snap-mymap,group=mymap . |
At cluster creation time, you can configure the storage class that is used to host the KubeVirt VM root volumes by using the --root-volume-storage-class argument.
To set a custom storage class and volume size for KubeVirt VMs, run the following command:
$ hcp create cluster kubevirt \
--name <hosted_cluster_name> \ (1)
--node-pool-replicas <worker_node_count> \ (2)
--pull-secret <path_to_pull_secret> \ (3)
--memory <memory> \ (4)
--cores <cpu> \ (5)
--root-volume-storage-class <root_volume_storage_class> \ (6)
--root-volume-size <volume_size> (7)
1 | Specify the name of your hosted cluster, for instance, example . |
2 | Specify the worker count, for example, 2 . |
3 | Specify the path to your pull secret, for example, /user/name/pullsecret . |
4 | Specify a value for memory, for example, 8Gi . |
5 | Specify a value for CPU, for example, 2 . |
6 | Specify the name of the storage class to host the KubeVirt VM root volumes, for example, ocs-storagecluster-ceph-rbd . |
7 | Specify the volume size, for example, 64 . |
As a result, the hosted cluster is created with its VMs hosted on PVCs.
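As a hedged verification sketch, assuming that the KubeVirt VMs and their root-volume PVCs run in the hosted control plane namespace, you can list them and review the STORAGECLASS column of the PVC output:

$ oc get vm,pvc -n <hosted_control_plane_namespace>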
You can use KubeVirt VM image caching to optimize both cluster startup time and storage usage. KubeVirt VM image caching supports the use of a storage class that is capable of smart cloning and the ReadWriteMany access mode. For more information about smart cloning, see Cloning a data volume using smart-cloning.
Image caching works as follows:
- The VM image is imported to a PVC that is associated with the hosted cluster.
- A unique clone of that PVC is created for every KubeVirt VM that is added as a worker node to the cluster.
Image caching reduces VM startup time by requiring only a single image import. It can further reduce overall cluster storage usage when the storage class supports copy-on-write cloning.
To enable image caching during cluster creation, use the --root-volume-cache-strategy=PVC argument by running the following command:
$ hcp create cluster kubevirt \
--name <hosted_cluster_name> \ (1)
--node-pool-replicas <worker_node_count> \ (2)
--pull-secret <path_to_pull_secret> \ (3)
--memory <memory> \ (4)
--cores <cpu> \ (5)
--root-volume-cache-strategy=PVC (6)
1 | Specify the name of your hosted cluster, for instance, example . |
2 | Specify the worker count, for example, 2 . |
3 | Specify the path to your pull secret, for example, /user/name/pullsecret . |
4 | Specify a value for memory, for example, 8Gi . |
5 | Specify a value for CPU, for example, 2 . |
6 | Specify a strategy for image caching, for example, PVC . |
At cluster creation time, you can configure the storage class that is used to host etcd data by using the --etcd-storage-class argument.
To configure a storage class for etcd, run the following command:
$ hcp create cluster kubevirt \
--name <hosted_cluster_name> \ (1)
--node-pool-replicas <worker_node_count> \ (2)
--pull-secret <path_to_pull_secret> \ (3)
--memory <memory> \ (4)
--cores <cpu> \ (5)
--etcd-storage-class=<etcd_storage_class_name> (6)
1 | Specify the name of your hosted cluster, for instance, example . |
2 | Specify the worker count, for example, 2 . |
3 | Specify the path to your pull secret, for example, /user/name/pullsecret . |
4 | Specify a value for memory, for example, 8Gi . |
5 | Specify a value for CPU, for example, 2 . |
6 | Specify the etcd storage class name, for example, lvm-storageclass . If you do not provide an --etcd-storage-class argument, the default storage class is used. |
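As a hedged check, assuming that the etcd data PVCs run in the hosted control plane namespace, you can confirm which storage class backs the etcd volumes by listing the PVCs and reviewing the STORAGECLASS column for the etcd data volumes:

$ oc get pvc -n <hosted_control_plane_namespace>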
You can attach one or more NVIDIA graphics processing unit (GPU) devices to node pools by using the hcp command-line interface (CLI) in a hosted cluster on OpenShift Virtualization.
Attaching NVIDIA GPU devices to node pools is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. |
You have exposed the NVIDIA GPU device as a resource on the node where the GPU device resides. For more information, see NVIDIA GPU Operator with OpenShift Virtualization.
You have exposed the NVIDIA GPU device as an extended resource on the node to assign it to node pools.
You can attach the GPU device to node pools during cluster creation by running the following command:
$ hcp create cluster kubevirt \
  --name <hosted_cluster_name> \ (1)
  --node-pool-replicas <worker_node_count> \ (2)
  --pull-secret <path_to_pull_secret> \ (3)
  --memory <memory> \ (4)
  --cores <cpu> \ (5)
  --host-device-name="<gpu_device_name>,count:<value>" (6)
1 | Specify the name of your hosted cluster, for instance, example . |
2 | Specify the worker count, for example, 3 . |
3 | Specify the path to your pull secret, for example, /user/name/pullsecret . |
4 | Specify a value for memory, for example, 16Gi . |
5 | Specify a value for CPU, for example, 2 . |
6 | Specify the GPU device name and the count, for example, --host-device-name="nvidia-a100,count:2" . The --host-device-name argument takes the name of the GPU device from the infrastructure node and an optional count that represents the number of GPU devices you want to attach to each virtual machine (VM) in node pools. The default count is 1 . For example, if you attach 2 GPU devices to 3 node pool replicas, all 3 VMs in the node pool are attached to the 2 GPU devices. |
You can use the --host-device-name argument multiple times to attach multiple devices of different types. |
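As a hedged verification sketch, assuming that the node pool VMs run in the hosted control plane namespace, you can confirm that the GPU devices were added to the VirtualMachineInstance specifications:

$ oc get vmi -n <hosted_control_plane_namespace> \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.domain.devices.hostDevices}{"\n"}{end}'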
You can attach one or more NVIDIA graphics processing unit (GPU) devices to node pools by configuring the nodepool.spec.platform.kubevirt.hostDevices field in the NodePool resource.
Attaching NVIDIA GPU devices to node pools is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. |
Attach one or more GPU devices to node pools:
To attach a single GPU device, configure the NodePool resource by using the following example configuration:
apiVersion: hypershift.openshift.io/v1beta1
kind: NodePool
metadata:
  name: <hosted_cluster_name> (1)
  namespace: <hosted_cluster_namespace> (2)
spec:
  arch: amd64
  clusterName: <hosted_cluster_name>
  management:
    autoRepair: false
    upgradeType: Replace
  nodeDrainTimeout: 0s
  nodeVolumeDetachTimeout: 0s
  platform:
    kubevirt:
      attachDefaultNetwork: true
      compute:
        cores: <cpu> (3)
        memory: <memory> (4)
      hostDevices: (5)
      - count: <count> (6)
        deviceName: <gpu_device_name> (7)
      networkInterfaceMultiqueue: Enable
      rootVolume:
        persistent:
          size: 32Gi
        type: Persistent
    type: KubeVirt
  replicas: <worker_node_count> (8)
1 | Specify the name of your hosted cluster, for instance, example . |
2 | Specify the name of the hosted cluster namespace, for example, clusters . |
3 | Specify a value for CPU, for example, 2 . |
4 | Specify a value for memory, for example, 16Gi . |
5 | The hostDevices field defines a list of different types of GPU devices that you can attach to node pools. |
6 | Specify the number of GPU devices you want to attach to each virtual machine (VM) in node pools. For example, if you attach 2 GPU devices to 3 node pool replicas, all 3 VMs in the node pool are attached to the 2 GPU devices. The default count is 1 . |
7 | Specify the GPU device name, for example, nvidia-a100 . |
8 | Specify the worker count, for example, 3 . |
To attach multiple GPU devices, configure the NodePool resource by using the following example configuration:
apiVersion: hypershift.openshift.io/v1beta1
kind: NodePool
metadata:
  name: <hosted_cluster_name>
  namespace: <hosted_cluster_namespace>
spec:
  arch: amd64
  clusterName: <hosted_cluster_name>
  management:
    autoRepair: false
    upgradeType: Replace
  nodeDrainTimeout: 0s
  nodeVolumeDetachTimeout: 0s
  platform:
    kubevirt:
      attachDefaultNetwork: true
      compute:
        cores: <cpu>
        memory: <memory>
      hostDevices:
      - count: <count>
        deviceName: <gpu_device_name>
      - count: <count>
        deviceName: <gpu_device_name>
      - count: <count>
        deviceName: <gpu_device_name>
      - count: <count>
        deviceName: <gpu_device_name>
      networkInterfaceMultiqueue: Enable
      rootVolume:
        persistent:
          size: 32Gi
        type: Persistent
    type: KubeVirt
  replicas: <worker_node_count>
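After you save either configuration to a file, you can apply it by running the following command; <node_pool_file>.yaml is a placeholder file name:

$ oc apply -f <node_pool_file>.yaml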