Sample YAML for configuring Microsoft Azure clusters

The following example YAML snippets show provider specification and failure domain configurations for an Azure cluster.

Sample Azure provider specification

When you create a control plane machine set for an existing cluster, the provider specification must match the providerSpec configuration in the control plane Machine CR that is created by the installation program. You can omit any field that is set in the failure domain section of the CR.

In the following example, <cluster_id> is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:

$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
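
To compare these values with your existing configuration, you can also review the providerSpec of an existing control plane machine. The following command is a sketch that assumes the standard control plane machine role label:

$ oc -n openshift-machine-api get machines \
  -l machine.openshift.io/cluster-api-machine-role=master \
  -o yaml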
Sample Azure providerSpec values
apiVersion: machine.openshift.io/v1
kind: ControlPlaneMachineSet
metadata:
  name: cluster
  namespace: openshift-machine-api
spec:
# ...
  template:
# ...
      spec:
        providerSpec:
          value:
            acceleratedNetworking: true
            apiVersion: machine.openshift.io/v1beta1
            credentialsSecret:
              name: azure-cloud-credentials (1)
              namespace: openshift-machine-api
            diagnostics: {}
            image: (2)
              offer: ""
              publisher: ""
              resourceID: /resourceGroups/<cluster_id>-rg/providers/Microsoft.Compute/galleries/gallery_<cluster_id>/images/<cluster_id>-gen2/versions/412.86.20220930 (3)
              sku: ""
              version: ""
            internalLoadBalancer: <cluster_id>-internal (4)
            kind: AzureMachineProviderSpec (5)
            location: <region> (6)
            managedIdentity: <cluster_id>-identity
            metadata:
              creationTimestamp: null
              name: <cluster_id>
            networkResourceGroup: <cluster_id>-rg
            osDisk: (7)
              diskSettings: {}
              diskSizeGB: 1024
              managedDisk:
                storageAccountType: Premium_LRS
              osType: Linux
            publicIP: false
            publicLoadBalancer: <cluster_id> (8)
            resourceGroup: <cluster_id>-rg
            subnet: <cluster_id>-master-subnet (9)
            userDataSecret:
              name: master-user-data (10)
            vmSize: Standard_D8s_v3
            vnet: <cluster_id>-vnet
            zone: "1" (11)
1 Specifies the secret name for the cluster. Do not change this value.
2 Specifies the image details for your control plane machine set.
3 Specifies an image that is compatible with your instance type. The Hyper-V generation V2 images created by the installation program have a -gen2 suffix, while V1 images have the same name without the suffix.
4 Specifies the internal load balancer for the control plane. This field might not be preconfigured but is required in both the ControlPlaneMachineSet and control plane Machine CRs.
5 Specifies the cloud provider platform type. Do not change this value.
6 Specifies the region in which to place control plane machines.
7 Specifies the disk configuration for the control plane.
8 Specifies the public load balancer for the control plane.

You can omit the publicLoadBalancer parameter on private OpenShift Container Platform clusters that have user-defined outbound routing.

9 Specifies the subnet for the control plane.
10 Specifies the control plane user data secret. Do not change this value.
11 Specifies the zone configuration for clusters that use a single zone for all failure domains.

If the cluster is configured to use a different zone for each failure domain, this parameter is configured in the failure domain. If you specify this value in the provider specification when using different zones for each failure domain, the Control Plane Machine Set Operator ignores it.

Sample Azure failure domain configuration

The control plane machine set concept of a failure domain is analogous to the existing Azure concept of an availability zone. The ControlPlaneMachineSet CR spreads control plane machines across multiple failure domains when possible.

When configuring Azure failure domains in the control plane machine set, you must specify the availability zone name. An Azure cluster uses a single subnet that spans multiple zones.

Sample Azure failure domain values
apiVersion: machine.openshift.io/v1
kind: ControlPlaneMachineSet
metadata:
  name: cluster
  namespace: openshift-machine-api
spec:
# ...
  template:
# ...
    machines_v1beta1_machine_openshift_io:
      failureDomains:
        azure:
        - zone: "1" (1)
        - zone: "2"
        - zone: "3"
        platform: Azure (2)
# ...
1 Each instance of zone specifies an Azure availability zone for a failure domain.

If the cluster is configured to use a single zone for all failure domains, the zone parameter is configured in the provider specification instead of in the failure domain configuration.

2 Specifies the cloud provider platform name. Do not change this value.
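
If you are unsure which availability zones in your region support the control plane VM size, one way to check is to query the Azure resource SKUs. This is a hedged sketch; the locationInfo fields come from the az vm list-skus output:

$ az vm list-skus --location <region> --size Standard_D8s_v3 \
  --query "[].{size:name, zones:locationInfo[0].zones}" --output json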

Enabling Microsoft Azure features for control plane machines

You can enable features by updating values in the control plane machine set.

Restricting the API server to private

After you deploy a cluster to Microsoft Azure, you can reconfigure the API server to use only the private zone.

Prerequisites
  • Install the OpenShift CLI (oc).

  • Have access to the web console as a user with admin privileges.

Procedure
  1. In the web portal or console for your cloud provider, take the following actions:

    1. Locate and delete the appropriate load balancer component:

      • For Azure, delete the api-internal-v4 rule for the public load balancer.

    2. For Azure, configure the Ingress Controller endpoint publishing scope to Internal. For more information, see "Configuring the Ingress Controller endpoint publishing scope to Internal".

    3. For the Azure public load balancer, if you configure the Ingress Controller endpoint publishing scope to Internal and there are no existing inbound rules in the public load balancer, you must create an outbound rule explicitly to provide outbound traffic for the backend address pool. For more information, see the Microsoft Azure documentation about adding outbound rules.

    4. Delete the api.$clustername DNS entry in the public zone.
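
    If you prefer to script these changes, the following Azure CLI commands are a rough sketch of the equivalent actions. The resource group, load balancer, rule, and DNS names are hypothetical and vary by cluster:

      $ az network lb rule delete \
        --resource-group <cluster_id>-rg \
        --lb-name <cluster_id> \
        --name api-internal-v4

      $ az network dns record-set a delete \
        --resource-group <base_domain_resource_group> \
        --zone-name <base_domain> \
        --name api.<cluster_name> --yes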

Using the Azure Marketplace offering

You can create a machine set running on Azure that deploys machines that use the Azure Marketplace offering. To use this offering, you must first obtain the Azure Marketplace image. When obtaining your image, consider the following:

  • While the images are the same, the Azure Marketplace publisher is different depending on your region. If you are located in North America, specify redhat as the publisher. If you are located in EMEA, specify redhat-limited as the publisher.

  • The offer includes a rh-ocp-worker SKU and a rh-ocp-worker-gen1 SKU. The rh-ocp-worker SKU represents a Hyper-V generation version 2 VM image. The default instance types used in OpenShift Container Platform are version 2 compatible. If you plan to use an instance type that is only version 1 compatible, use the image associated with the rh-ocp-worker-gen1 SKU. The rh-ocp-worker-gen1 SKU represents a Hyper-V version 1 VM image.

Installing images with the Azure Marketplace is not supported on clusters with 64-bit ARM instances.

Prerequisites
  • You have installed the Azure CLI client (az).

  • Your Azure account is entitled for the offer and you have logged into this account with the Azure CLI client.

Procedure
  1. Display all of the available OpenShift Container Platform images by running one of the following commands:

    • North America:

      $ az vm image list --all --offer rh-ocp-worker --publisher redhat -o table
      Example output
      Offer          Publisher       Sku                 Urn                                                             Version
      -------------  --------------  ------------------  --------------------------------------------------------------  -----------------
      rh-ocp-worker  RedHat          rh-ocp-worker       RedHat:rh-ocp-worker:rh-ocp-worker:4.15.2024072409              4.15.2024072409
      rh-ocp-worker  RedHat          rh-ocp-worker-gen1  RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409         4.15.2024072409
    • EMEA:

      $ az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table
      Example output
      Offer          Publisher       Sku                 Urn                                                                     Version
      -------------  --------------  ------------------  --------------------------------------------------------------          -----------------
      rh-ocp-worker  redhat-limited  rh-ocp-worker       redhat-limited:rh-ocp-worker:rh-ocp-worker:4.15.2024072409              4.15.2024072409
      rh-ocp-worker  redhat-limited  rh-ocp-worker-gen1  redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409         4.15.2024072409

    Use the latest image that is available for compute and control plane nodes. If required, your VMs are automatically upgraded as part of the installation process.

  2. Inspect the image for your offer by running one of the following commands:

    • North America:

      $ az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>
    • EMEA:

      $ az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>
  3. Review the terms of the offer by running one of the following commands:

    • North America:

      $ az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>
    • EMEA:

      $ az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>
  4. Accept the terms of the offering by running one of the following commands:

    • North America:

      $ az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>
    • EMEA:

      $ az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>
  5. Record the image details of your offer, specifically the values for publisher, offer, sku, and version.

  6. Add the following parameters to the providerSpec section of your machine set YAML file using the image details for your offer:

    Sample providerSpec image values for Azure Marketplace machines
    providerSpec:
      value:
        image:
          offer: rh-ocp-worker
          publisher: redhat
          resourceID: ""
          sku: rh-ocp-worker
          type: MarketplaceWithPlan
          version: 413.92.2023101700
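
    Before you apply the change, you can confirm that the offer terms are accepted for your subscription. This is a hedged sketch; the accepted property is part of the marketplace terms object that the earlier az vm image terms commands display:

    $ az vm image terms show \
      --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> \
      --query accepted --output tsv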

Enabling Azure boot diagnostics

You can enable boot diagnostics on Azure machines that your machine set creates.

Prerequisites
  • Have an existing Microsoft Azure cluster.

Procedure
  • Add the diagnostics configuration that is applicable to your storage type to the providerSpec field in your machine set YAML file:

    • For an Azure Managed storage account:

      providerSpec:
        value:
          diagnostics:
            boot:
              storageAccountType: AzureManaged (1)
      1 Specifies an Azure Managed storage account.
    • For an Azure Unmanaged storage account:

      providerSpec:
        value:
          diagnostics:
            boot:
              storageAccountType: CustomerManaged (1)
              customerManaged:
                storageAccountURI: https://<storage-account>.blob.core.windows.net (2)
      1 Specifies an Azure Unmanaged storage account.
      2 Replace <storage-account> with the name of your storage account.

      Only the Azure Blob Storage data service is supported.

Verification
  • On the Microsoft Azure portal, review the Boot diagnostics page for a machine deployed by the machine set, and verify that you can see the serial logs for the machine.
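
    Alternatively, you can retrieve the serial console log with the Azure CLI. This is a hedged sketch; resource names are placeholders:

    $ az vm boot-diagnostics get-boot-log \
      --resource-group <cluster_id>-rg --name <machine_name>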

Machine sets that deploy machines with ultra disks as data disks

You can create a machine set running on Azure that deploys machines with ultra disks. Ultra disks are high-performance storage intended for use with the most demanding data workloads.

Creating machines with ultra disks by using machine sets

You can deploy machines with ultra disks on Azure by editing your machine set YAML file.

Prerequisites
  • Have an existing Microsoft Azure cluster.

Procedure
  1. Extract the user data from the existing master data secret in the openshift-machine-api namespace into a file by running the following command:

    $ oc -n openshift-machine-api \
    get secret <role>-user-data \ (1)
    --template='{{index .data.userData | base64decode}}' | jq > userData.txt (2)
    1 Replace <role> with master.
    2 Specify userData.txt as the name of the output file for the extracted user data.
  2. In a text editor, open the userData.txt file and locate the final } character in the file.

    1. On the immediately preceding line, add a ,.

    2. Create a new line after the , and add the following configuration details:

      "storage": {
        "disks": [ (1)
          {
            "device": "/dev/disk/azure/scsi1/lun0", (2)
            "partitions": [ (3)
              {
                "label": "lun0p1", (4)
                "sizeMiB": 1024, (5)
                "startMiB": 0
              }
            ]
          }
        ],
        "filesystems": [ (6)
          {
            "device": "/dev/disk/by-partlabel/lun0p1",
            "format": "xfs",
            "path": "/var/lib/lun0p1"
          }
        ]
      },
      "systemd": {
        "units": [ (7)
          {
            "contents": "[Unit]\nBefore=local-fs.target\n[Mount]\nWhere=/var/lib/lun0p1\nWhat=/dev/disk/by-partlabel/lun0p1\nOptions=defaults,pquota\n[Install]\nWantedBy=local-fs.target\n", (8)
            "enabled": true,
            "name": "var-lib-lun0p1.mount"
          }
        ]
      }
      1 The configuration details for the disk that you want to attach to a node as an ultra disk.
      2 Specify the lun value that is defined in the dataDisks stanza of the machine set you are using. For example, if the machine set contains lun: 0, specify lun0. You can initialize multiple data disks by specifying multiple "disks" entries in this configuration file. If you specify multiple "disks" entries, ensure that the lun value for each matches the value in the machine set.
      3 The configuration details for a new partition on the disk.
      4 Specify a label for the partition. You might find it helpful to use hierarchical names, such as lun0p1 for the first partition of lun0.
      5 Specify the total size in MiB of the partition.
      6 Specify the filesystem to use when formatting a partition. Use the partition label to specify the partition.
      7 Specify a systemd unit to mount the partition at boot. Use the partition label to specify the partition. You can create multiple partitions by specifying multiple "partitions" entries in this configuration file. If you specify multiple "partitions" entries, you must specify a systemd unit for each.
      8 For Where, specify the value of storage.filesystems.path. For What, specify the value of storage.filesystems.device.
  3. Extract the disabling template value to a file called disableTemplating.txt by running the following command:

    $ oc -n openshift-machine-api get secret <role>-user-data \ (1)
    --template='{{index .data.disableTemplating | base64decode}}' | jq > disableTemplating.txt
    1 Replace <role> with master.
  4. Combine the userData.txt file and disableTemplating.txt file to create a new user data secret by running the following command:

    $ oc -n openshift-machine-api create secret generic <role>-user-data-x5 \ (1)
    --from-file=userData=userData.txt \
    --from-file=disableTemplating=disableTemplating.txt
    1 For <role>-user-data-x5, specify the name of the secret. Replace <role> with master.
  5. Edit your control plane machine set CR by running the following command:

    $ oc --namespace openshift-machine-api edit controlplanemachineset.machine.openshift.io cluster
  6. Add the following lines in the positions indicated:

    apiVersion: machine.openshift.io/v1
    kind: ControlPlaneMachineSet
    spec:
      template:
        machines_v1beta1_machine_openshift_io:
          spec:
            metadata:
              labels:
                disk: ultrassd (1)
            providerSpec:
              value:
                ultraSSDCapability: Enabled (2)
                dataDisks: (2)
                - nameSuffix: ultrassd
                  lun: 0
                  diskSizeGB: 4
                  deletionPolicy: Delete
                  cachingType: None
                  managedDisk:
                    storageAccountType: UltraSSD_LRS
                userDataSecret:
                  name: <role>-user-data-x5 (3)
    1 Specify a label to use to select a node that is created by this machine set. This procedure uses disk.ultrassd for this value.
    2 These lines enable the use of ultra disks. For dataDisks, include the entire stanza.
    3 Specify the user data secret created earlier. Replace <role> with master.
  7. Save your changes.

    • For clusters that use the default RollingUpdate update strategy, the Operator automatically propagates the changes to your control plane configuration.

    • For clusters that are configured to use the OnDelete update strategy, you must replace your control plane machines manually.
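
    To watch the rollout, you can check the control plane machine set status and the control plane machines. This is a sketch that assumes the standard control plane machine role label:

      $ oc get controlplanemachineset -n openshift-machine-api

      $ oc get machines -n openshift-machine-api -w \
        -l machine.openshift.io/cluster-api-machine-role=master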

Verification
  1. Validate that the machines are created by running the following command:

    $ oc get machines

    The machines should be in the Running state.

  2. For a machine that is running and has a node attached, validate the partition by running the following command:

    $ oc debug node/<node-name> -- chroot /host lsblk

    In this command, oc debug node/<node-name> starts a debugging shell on the node <node-name> and passes a command with --. The passed command chroot /host provides access to the underlying host OS binaries, and lsblk shows the block devices that are attached to the host OS machine.

Next steps
  • To use an ultra disk on the control plane, reconfigure your workload to use the control plane’s ultra disk mount point.

Troubleshooting resources for machine sets that enable ultra disks

Use the information in this section to understand and recover from issues you might encounter.

Incorrect ultra disk configuration

If an incorrect configuration of the ultraSSDCapability parameter is specified in the machine set, the machine provisioning fails.

For example, if the ultraSSDCapability parameter is set to Disabled, but an ultra disk is specified in the dataDisks parameter, the following error message appears:

StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set.
  • To resolve this issue, verify that your machine set configuration is correct.

Unsupported disk parameters

If a region, availability zone, or instance size that is not compatible with ultra disks is specified in the machine set, the machine provisioning fails. Check the logs for the following error message:

failed to create vm <machine_name>: failure sending request for machine <machine_name>: cannot create vm: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="BadRequest" Message="Storage Account type 'UltraSSD_LRS' is not supported <more_information_about_why>."
  • To resolve this issue, verify that you are using this feature in a supported environment and that your machine set configuration is correct.

Unable to delete disks

If the deletion of ultra disks as data disks is not working as expected, the machines are deleted and the data disks are orphaned. If you want to remove the orphaned disks, you must delete them manually.
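
The following Azure CLI commands are a hedged sketch for finding and removing orphaned data disks. The resource group and disk names are placeholders:

$ az disk list --resource-group <cluster_id>-rg \
  --query "[?diskState=='Unattached'].{name:name, sku:sku.name}" --output table

$ az disk delete --resource-group <cluster_id>-rg --name <disk_name> --yes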

Enabling customer-managed encryption keys for a machine set

You can supply an encryption key to Azure to encrypt data on managed disks at rest. You can enable server-side encryption with customer-managed keys by using the Machine API.

An Azure Key Vault, a disk encryption set, and an encryption key are required to use a customer-managed key. The disk encryption set must be in a resource group where the Cloud Credential Operator (CCO) has granted permissions. Otherwise, an additional reader role must be granted on the disk encryption set.
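
The following Azure CLI commands are a minimal sketch of creating these prerequisites. All resource names are hypothetical, and flags can vary by CLI version:

$ az keyvault create --name <key_vault_name> --resource-group <resource_group> \
  --location <region> --enable-purge-protection true

$ az keyvault key create --vault-name <key_vault_name> --name <key_name> --protection software

$ az disk-encryption-set create --name <disk_encryption_set_name> \
  --resource-group <resource_group> --source-vault <key_vault_name> \
  --key-url "$(az keyvault key show --vault-name <key_vault_name> --name <key_name> --query key.kid --output tsv)"

$ az keyvault set-policy --name <key_vault_name> \
  --object-id "$(az disk-encryption-set show --name <disk_encryption_set_name> --resource-group <resource_group> --query identity.principalId --output tsv)" \
  --key-permissions get wrapKey unwrapKey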

Procedure
  • Configure the disk encryption set under the providerSpec field in your machine set YAML file. For example:

    providerSpec:
      value:
        osDisk:
          diskSizeGB: 128
          managedDisk:
            diskEncryptionSet:
              id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name>
            storageAccountType: Premium_LRS

Configuring trusted launch for Azure virtual machines by using machine sets

Using trusted launch for Azure virtual machines is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

OpenShift Container Platform 4.17 supports trusted launch for Azure virtual machines (VMs). By editing the machine set YAML file, you can configure the trusted launch options that a machine set uses for machines that it deploys. For example, you can configure these machines to use UEFI security features such as Secure Boot or a dedicated virtual Trusted Platform Module (vTPM) instance.

Some feature combinations result in an invalid configuration.

Table 1. UEFI feature combination compatibility
Secure Boot[1]    vTPM[2]     Valid configuration
Enabled           Enabled     Yes
Enabled           Disabled    Yes
Enabled           Omitted     Yes
Disabled          Enabled     Yes
Omitted           Enabled     Yes
Disabled          Disabled    No
Omitted           Disabled    No
Omitted           Omitted     No

  1. Using the secureBoot field.

  2. Using the virtualizedTrustedPlatformModule field.

For more information about related features and functionality, see the Microsoft Azure documentation about Trusted launch for Azure virtual machines.

Procedure
  1. In a text editor, open the YAML file for an existing machine set or create a new one.

  2. Edit the following section under the providerSpec field to provide a valid configuration:

    Sample valid configuration with UEFI Secure Boot and vTPM enabled
    apiVersion: machine.openshift.io/v1
    kind: ControlPlaneMachineSet
    # ...
    spec:
      template:
        machines_v1beta1_machine_openshift_io:
          spec:
            providerSpec:
              value:
                securityProfile:
                  settings:
                    securityType: TrustedLaunch (1)
                    trustedLaunch:
                      uefiSettings: (2)
                        secureBoot: Enabled (3)
                        virtualizedTrustedPlatformModule: Enabled (4)
    # ...
    1 Enables the use of trusted launch for Azure virtual machines. This value is required for all valid configurations.
    2 Specifies which UEFI security features to use. This section is required for all valid configurations.
    3 Enables UEFI Secure Boot.
    4 Enables the use of a vTPM.
Verification
  • On the Azure portal, review the details for a machine deployed by the machine set and verify that the trusted launch options match the values that you configured.
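
    You can also inspect the security profile with the Azure CLI. This is a hedged sketch; resource names are placeholders:

    $ az vm show --resource-group <cluster_id>-rg --name <machine_name> \
      --query securityProfile --output json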

Configuring Azure confidential virtual machines by using machine sets

Using Azure confidential virtual machines is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

OpenShift Container Platform 4.17 supports Azure confidential virtual machines (VMs).

Confidential VMs are currently not supported on 64-bit ARM architectures.

By editing the machine set YAML file, you can configure the confidential VM options that a machine set uses for machines that it deploys. For example, you can configure these machines to use UEFI security features such as Secure Boot or a dedicated virtual Trusted Platform Module (vTPM) instance.

Not all instance types support confidential VMs. Do not change the instance type for a control plane machine set that is configured to use confidential VMs to a type that is incompatible. Using an incompatible instance type can cause your cluster to become unstable.
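
To find confidential VM-capable sizes that are available in your region, one hedged approach is to list the relevant size families with the Azure CLI. The --size value accepts a partial name:

$ az vm list-skus --location <region> --size Standard_DC --all --output table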

For more information about related features and functionality, see the Microsoft Azure documentation about Confidential virtual machines.

Procedure
  1. In a text editor, open the YAML file for an existing machine set or create a new one.

  2. Edit the following section under the providerSpec field:

    Sample configuration
    apiVersion: machine.openshift.io/v1
    kind: ControlPlaneMachineSet
    # ...
    spec:
      template:
        machines_v1beta1_machine_openshift_io:
          spec:
            providerSpec:
              value:
                osDisk:
                  # ...
                  managedDisk:
                    securityProfile: (1)
                      securityEncryptionType: VMGuestStateOnly (2)
                  # ...
                securityProfile: (3)
                  settings:
                    securityType: ConfidentialVM (4)
                    confidentialVM:
                      uefiSettings: (5)
                        secureBoot: Disabled (6)
                        virtualizedTrustedPlatformModule: Enabled (7)
                vmSize: Standard_DC16ads_v5 (8)
    # ...
    1 Specifies security profile settings for the managed disk when using a confidential VM.
    2 Enables encryption of the Azure VM Guest State (VMGS) blob. This setting requires the use of vTPM.
    3 Specifies security profile settings for the confidential VM.
    4 Enables the use of confidential VMs. This value is required for all valid configurations.
    5 Specifies which UEFI security features to use. This section is required for all valid configurations.
    6 Disables UEFI Secure Boot.
    7 Enables the use of a vTPM.
    8 Specifies an instance type that supports confidential VMs.
Verification
  • On the Azure portal, review the details for a machine deployed by the machine set and verify that the confidential VM options match the values that you configured.

Accelerated Networking for Microsoft Azure VMs

Accelerated Networking uses single root I/O virtualization (SR-IOV) to provide Microsoft Azure VMs with a more direct path to the switch. This enhances network performance. This feature can be enabled after installation.

Limitations

Consider the following limitations when deciding whether to use Accelerated Networking:

  • Accelerated Networking is only supported on clusters where the Machine API is operational.

  • Accelerated Networking requires an Azure VM size that includes at least four vCPUs. To satisfy this requirement, you can change the value of vmSize in your machine set. For information about Azure VM sizes, see Microsoft Azure documentation.
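
One hedged way to check whether a VM size supports Accelerated Networking is to inspect its SKU capabilities with the Azure CLI and jq:

$ az vm list-skus --location <region> --size <azure_vm_size> --output json \
  | jq '.[0].capabilities[] | select(.name == "AcceleratedNetworkingEnabled")'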

Configuring Capacity Reservation by using machine sets

OpenShift Container Platform version 4.17 and later supports on-demand Capacity Reservation with Capacity Reservation groups on Microsoft Azure clusters.

You can configure a machine set to deploy machines on any available resources that match the parameters of a capacity request that you define. These parameters specify the VM size, region, and number of instances that you want to reserve. If your Azure subscription quota can accommodate the capacity request, the deployment succeeds.

For more information, including limitations and suggested use cases for this Azure instance type, see the Microsoft Azure documentation about On-demand Capacity Reservation.

You cannot change an existing Capacity Reservation configuration for a machine set. To use a different Capacity Reservation group, you must replace the machine set and the machines that the previous machine set deployed.

Prerequisites
  • You have access to the cluster with cluster-admin privileges.

  • You installed the OpenShift CLI (oc).

  • You created a Capacity Reservation group.

    For more information, see the Microsoft Azure documentation Create a Capacity Reservation.
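
    The following Azure CLI commands are a minimal sketch of creating a Capacity Reservation group, reserving capacity, and retrieving the group ID to use for the capacityReservationGroupID field. All names are hypothetical, and flags can vary by CLI version:

    $ az capacity reservation group create -n <capacity_reservation_group_name> \
      -g <resource_group> -l <region> --zones 1

    $ az capacity reservation create -c <capacity_reservation_group_name> \
      -n <reservation_name> -g <resource_group> \
      --sku Standard_D8s_v3 --capacity 3 --zone 1

    $ az capacity reservation group show -n <capacity_reservation_group_name> \
      -g <resource_group> --query id --output tsv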

Procedure
  1. In a text editor, open the YAML file for an existing machine set or create a new one.

  2. Edit the following section under the providerSpec field:

    Sample configuration
    apiVersion: machine.openshift.io/v1
    kind: ControlPlaneMachineSet
    # ...
    spec:
      template:
        machines_v1beta1_machine_openshift_io:
          spec:
            providerSpec:
              value:
                capacityReservationGroupID: <capacity_reservation_group> (1)
    # ...
    1 Specify the ID of the Capacity Reservation group that you want the machine set to deploy machines on.
Verification
  • To verify machine deployment, list the machines that the machine set created by running the following command:

    $ oc get machine \
      -n openshift-machine-api \
      -l machine.openshift.io/cluster-api-machine-role=master

    In the output, verify that the characteristics of the listed machines match the parameters of your Capacity Reservation.

Enabling Accelerated Networking on an existing Microsoft Azure cluster

You can enable Accelerated Networking on Azure by adding acceleratedNetworking to your machine set YAML file.

Prerequisites
  • Have an existing Microsoft Azure cluster where the Machine API is operational.

Procedure
  • Add the following to the providerSpec field:

    providerSpec:
      value:
        acceleratedNetworking: true (1)
        vmSize: <azure-vm-size> (2)
    1 This line enables Accelerated Networking.
    2 Specify an Azure VM size that includes at least four vCPUs. For information about VM sizes, see Microsoft Azure documentation.
Verification
  • On the Microsoft Azure portal, review the Networking settings page for a machine provisioned by the machine set, and verify that the Accelerated networking field is set to Enabled.
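
    You can also check the NIC configuration with the Azure CLI. This is a hedged sketch; the resource group name is a placeholder:

    $ az network nic list --resource-group <cluster_id>-rg \
      --query "[].{nic:name, acceleratedNetworking:enableAcceleratedNetworking}" --output table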