To create a Google Cloud Platform (GCP) cluster with multi-architecture compute machines, you must first create a single-architecture GCP installer-provisioned cluster with the multi-architecture installer binary. For more information on GCP installations, refer to Installing a cluster on GCP with customizations. You can then add ARM64 compute machine sets to your GCP cluster.

Secure Boot is currently not supported on ARM64 machines for GCP.

Verifying cluster compatibility

Before you can start adding compute nodes of different architectures to your cluster, you must verify that your cluster is multi-architecture compatible.

  • You installed the OpenShift CLI (oc).

  • You can check that your cluster uses the multi-architecture payload by running the following command:

    $ oc adm release info -o jsonpath="{ .metadata.metadata}"
  1. If you see the following output, then your cluster is using the multi-architecture payload:

     "release.openshift.io/architecture": "multi",
     "url": "https://access.redhat.com/errata/<errata_version>"

    You can then begin adding multi-arch compute nodes to your cluster.

  2. If you see the following output, then your cluster is not using the multi-architecture payload:

     "url": "https://access.redhat.com/errata/<errata_version>"

    To migrate your cluster so the cluster supports multi-architecture compute machines, follow the procedure in Migrating to a cluster with multi-architecture compute machines.
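The payload check above can be wrapped in a small script that branches on the two possible outputs. The following is a sketch that runs the match against captured sample output; on a live cluster you would instead populate `payload` from the real `oc adm release info` command shown in the comment:

```shell
# Sketch: decide whether a release payload is multi-architecture.
# `payload` holds captured sample output here; on a live cluster, replace it with:
#   payload=$(oc adm release info -o jsonpath='{ .metadata.metadata}')
payload='{"release.openshift.io/architecture":"multi","url":"https://access.redhat.com/errata/<errata_version>"}'
if printf '%s' "$payload" | grep -q '"release.openshift.io/architecture" *: *"multi"'; then
  echo "multi-architecture payload: ARM64 compute nodes can be added"
else
  echo "single-architecture payload: migrate the cluster first"
fi
```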

Adding an ARM64 compute machine set to your GCP cluster

To configure a cluster with multi-architecture compute machines, you must create a GCP ARM64 compute machine set. This adds ARM64 compute nodes to your cluster.

  • You installed the OpenShift CLI (oc).

  • You used the installation program to create an AMD64 single-architecture GCP cluster with the multi-architecture installer binary.

  • Create and modify a compute machine set; this controls the ARM64 compute nodes in your cluster:

    $ oc create -f gcp-arm64-machine-set-0.yaml
    Sample GCP YAML compute machine set to deploy an ARM64 compute node
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
      name: <infrastructure_id>-w-a
      namespace: openshift-machine-api
    spec:
      replicas: 1
      selector:
        matchLabels:
          machine.openshift.io/cluster-api-cluster: <infrastructure_id>
          machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a
      template:
        metadata:
          creationTimestamp: null
          labels:
            machine.openshift.io/cluster-api-cluster: <infrastructure_id>
            machine.openshift.io/cluster-api-machine-role: <role> (2)
            machine.openshift.io/cluster-api-machine-type: <role>
            machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a
        spec:
          metadata:
            labels:
              node-role.kubernetes.io/<role>: ""
          providerSpec:
            value:
              apiVersion: gcpprovider.openshift.io/v1beta1
              canIPForward: false
              credentialsSecret:
                name: gcp-cloud-credentials
              deletionProtection: false
              disks:
              - autoDelete: true
                boot: true
                image: <path_to_image> (3)
                labels: null
                sizeGb: 128
                type: pd-ssd
              gcpMetadata: (4)
              - key: <custom_metadata_key>
                value: <custom_metadata_value>
              kind: GCPMachineProviderSpec
              machineType: n1-standard-4 (5)
              metadata:
                creationTimestamp: null
              networkInterfaces:
              - network: <infrastructure_id>-network
                subnetwork: <infrastructure_id>-worker-subnet
              projectID: <project_name> (6)
              region: us-central1 (7)
              serviceAccounts:
              - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com
                scopes:
                - https://www.googleapis.com/auth/cloud-platform
              tags:
              - <infrastructure_id>-worker
              userDataSecret:
                name: worker-user-data
              zone: us-central1-a
    1 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. You can obtain the infrastructure ID by running the following command:
    $ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
    2 Specify the role node label to add.
    3 Specify the path to the image that is used in current compute machine sets. You need the project and image name to construct the path.

    To access the project and image name, run the following command:

    $ oc get configmap/coreos-bootimages \
      -n openshift-machine-config-operator \
      -o jsonpath='{.data.stream}' | jq \
      -r '.architectures.aarch64.images.gcp'
    Example output
      "gcp": {
        "release": "415.92.202309142014-0",
        "project": "rhcos-cloud",
        "name": "rhcos-415-92-202309142014-0-gcp-aarch64"
      }

    Use the project and name parameters from the output to build the path to the image in your machine set. The path to the image uses the following format:

    projects/<project>/global/images/<image_name>
    4 Optional: Specify custom metadata in the form of a key:value pair. For example use cases, see the GCP documentation for setting custom metadata.
    5 Specify an ARM64 supported machine type. For more information, refer to Tested instance types for GCP on 64-bit ARM infrastructures in "Additional resources".
    6 Specify the name of the GCP project that you use for your cluster.
    7 Specify the region, for example, us-central1. Ensure that the zone you select offers 64-bit ARM machines.
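As a convenience, the `projects/<project>/global/images/<image_name>` path described in callout 3 can be assembled with standard shell tools. The following is a sketch against a captured sample of the `coreos-bootimages` record; on a live cluster you would populate `gcp` from the real command shown in the comment:

```shell
# Sketch: build the image path from the aarch64 GCP boot image record.
# `gcp` holds a captured sample record; on a live cluster, populate it with:
#   gcp=$(oc get configmap/coreos-bootimages -n openshift-machine-config-operator \
#     -o jsonpath='{.data.stream}' | jq -c '.architectures.aarch64.images.gcp')
gcp='{"release":"415.92.202309142014-0","project":"rhcos-cloud","name":"rhcos-415-92-202309142014-0-gcp-aarch64"}'
project=$(printf '%s' "$gcp" | sed -n 's/.*"project": *"\([^"]*\)".*/\1/p')
name=$(printf '%s' "$gcp" | sed -n 's/.*"name": *"\([^"]*\)".*/\1/p')
echo "projects/${project}/global/images/${name}"
# -> projects/rhcos-cloud/global/images/rhcos-415-92-202309142014-0-gcp-aarch64
```

The resulting string is what you paste into the `image` field of the machine set YAML.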
  1. View the list of compute machine sets by entering the following command:

    $ oc get machineset -n openshift-machine-api

    You can then see your created ARM64 machine set.

    Example output
    NAME                                          DESIRED   CURRENT   READY   AVAILABLE   AGE
    <infrastructure_id>-gcp-arm64-machine-set-0   2         2         2       2           10m
  2. You can check that the nodes are ready and schedulable with the following command:

    $ oc get nodes
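To confirm that the new nodes are the ARM64 ones, you can filter on the standard `kubernetes.io/arch` node label with `oc get nodes -l kubernetes.io/arch=arm64`. The snippet below sketches the same filter locally against hypothetical two-column sample output (node name, architecture):

```shell
# Sketch: pick out arm64 nodes from a name/architecture listing.
# `nodes` holds hypothetical sample rows; on a live cluster you would
# instead run: oc get nodes -l kubernetes.io/arch=arm64
nodes='worker-amd64-a amd64
worker-arm64-a arm64
worker-arm64-b arm64'
printf '%s\n' "$nodes" | awk '$2 == "arm64" {print $1}'
# -> worker-arm64-a
# -> worker-arm64-b
```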