
Workload Identity Federation Overview

Workload Identity Federation (WIF) is a Google Cloud Platform (GCP) Identity and Access Management (IAM) feature that provides third parties a secure method to access resources on a customer’s cloud account. WIF eliminates the need for service account keys, and is Google Cloud’s preferred method of credential authentication.

While service account keys can provide powerful access to your Google Cloud resources, they must be maintained by the end user and can be a security risk if they are not managed properly. WIF does not use service account keys as an access method for your Google Cloud resources. Instead, WIF grants access by using credentials from external identity providers to generate short-lived credentials for workloads. The workloads can then use these credentials to temporarily impersonate service accounts and access Google Cloud resources. This removes the burden of having to properly maintain service account keys, and removes the risk of unauthorized users gaining access to service account keys.

The following items provide a basic overview of the Workload Identity Federation process:

  • The owner of the Google Cloud Platform (GCP) project configures a workload identity pool with an identity provider, allowing OpenShift Dedicated to access the project’s associated service accounts using short-lived credentials.

  • This workload identity pool is configured to authenticate requests using an identity provider (IdP) that the user defines.

  • For applications to get access to cloud resources, they first pass credentials to Google’s Security Token Service (STS). STS uses the specified identity provider to verify the credentials.

  • Once the credentials are verified, STS returns a temporary access token to the caller, giving the application the ability to impersonate the service account bound to that identity.
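The token exchange in the last two steps can be sketched as a request to Google's Security Token Service. The following is a hypothetical illustration only: the pool, provider, and token values are placeholders, and in practice a client library builds and sends this request for you.

```shell
# Sketch of the request body a workload sends to Google's STS endpoint
# (https://sts.googleapis.com/v1/token). All values here are placeholders.
AUDIENCE="//iam.googleapis.com/projects/<project_number>/locations/global/workloadIdentityPools/<pool_id>/providers/<provider_id>"
SUBJECT_TOKEN="<token_issued_by_your_identity_provider>"

STS_REQUEST=$(cat <<EOF
{
  "audience": "${AUDIENCE}",
  "grantType": "urn:ietf:params:oauth:grant-type:token-exchange",
  "requestedTokenType": "urn:ietf:params:oauth:token-type:access_token",
  "subjectTokenType": "urn:ietf:params:oauth:token-type:jwt",
  "subjectToken": "${SUBJECT_TOKEN}"
}
EOF
)
echo "${STS_REQUEST}"
# STS verifies the subject token against the configured identity provider and
# returns a federated token, which is then exchanged for a short-lived service
# account impersonation token via the IAM Credentials generateAccessToken API.
```

The federated token returned by STS is what allows the workload to temporarily impersonate the service account bound to that identity.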

Operators also need access to cloud resources. By using WIF instead of service account keys to grant this access, cluster security is further strengthened, as service account keys are no longer stored in the cluster. Instead, operators are given temporary access tokens that impersonate the service accounts. These tokens are short-lived and regularly rotated.

For more information about Workload Identity Federation, refer to the Google Cloud Platform documentation.

Workload Identity Federation (WIF) is only supported on OpenShift Dedicated version 4.17 and later.

Prerequisites

  • You have confirmed your Google Cloud account has the necessary resource quotas and limits to support your desired cluster size according to the cluster resource requirements.

For more information regarding resource quotas and limits, see Additional resources.

Creating a Workload Identity Federation cluster using OpenShift Cluster Manager

Procedure
  1. Log in to OpenShift Cluster Manager and click Create cluster on the OpenShift Dedicated card.

  2. Under Billing model, configure the subscription type and infrastructure type.

    Workload Identity Federation is supported by the Customer Cloud Subscription (CCS) infrastructure type only.

    1. Select a subscription type. For information about OpenShift Dedicated subscription options, see Cluster subscriptions and registration in the OpenShift Cluster Manager documentation.

    2. Select the Customer cloud subscription infrastructure type.

    3. Click Next.

  3. Select Run on Google Cloud Platform.

  4. Select Workload Identity Federation as the Authentication type.

    1. Read and complete all the required prerequisites.

    2. Click the checkbox indicating that you have read and completed all the required prerequisites.

  5. To create a new WIF configuration, open a terminal window and run the following OCM CLI command.

    $ ocm gcp create wif-config --name <wif_name> \ (1)
      --project <gcp_project_id> (2)
    1 Replace <wif_name> with the name of your WIF configuration.
    2 Replace <gcp_project_id> with the ID of the Google Cloud Platform (GCP) project where the WIF configuration will be implemented.
  6. Select a configured WIF configuration from the WIF configuration drop-down list. If you want to select the WIF configuration you created in the last step, click Refresh first.

  7. Click Next.

  8. On the Details page, provide a name for your cluster and specify the cluster details:

    1. In the Cluster name field, enter a name for your cluster.

    2. Optional: Cluster creation generates a domain prefix as a subdomain for your provisioned cluster on openshiftapps.com. If the cluster name is less than or equal to 15 characters, that name is used for the domain prefix. If the cluster name is longer than 15 characters, the domain prefix is randomly generated as a 15-character string.

      To customize the subdomain prefix, select the Create custom domain prefix checkbox, and enter your domain prefix name in the Domain prefix field. The domain prefix cannot be longer than 15 characters, must be unique within your organization, and cannot be changed after cluster creation.
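The prefix rule above can be illustrated with a small shell sketch. The truncation behavior is paraphrased from this section; the random-string generation shown is only an approximation of what cluster creation actually does.

```shell
# Illustrates the domain prefix rule: cluster names of 15 characters or fewer
# are used as-is; longer names get a randomly generated 15-character prefix.
make_prefix() {
  name="$1"
  if [ "${#name}" -le 15 ]; then
    printf '%s\n' "$name"
  else
    # approximation of the random 15-character string cluster creation generates
    LC_ALL=C tr -dc 'a-z0-9' </dev/urandom | head -c 15
    echo
  fi
}

make_prefix "mycluster"                  # 9 characters: used unchanged
make_prefix "my-very-long-cluster-name"  # 25 characters: replaced by a random prefix
```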

    3. Select a cluster version from the Version drop-down menu.

      Workload Identity Federation (WIF) is only supported on OpenShift Dedicated version 4.17 and later.

    4. Select a cloud provider region from the Region drop-down menu.

    5. Select a Single zone or Multi-zone configuration.

    6. Optional: Select Enable Secure Boot for Shielded VMs to use Shielded VMs when installing your cluster. For more information, see Shielded VMs.

      To successfully create a cluster, you must select Enable Secure Boot support for Shielded VMs if your organization has the policy constraint constraints/compute.requireShieldedVm enabled. For more information regarding GCP organizational policy constraints, see Organization policy constraints.

    7. Leave Enable user workload monitoring selected to monitor your own projects in isolation from Red Hat Site Reliability Engineer (SRE) platform metrics. This option is enabled by default.

  9. Optional: Expand Advanced Encryption to make changes to encryption settings.

    1. Select Use custom KMS keys to use custom KMS keys. If you prefer not to use custom KMS keys, leave the default setting Use default KMS Keys.

    2. With Use Custom KMS keys selected:

      1. Select a key ring location from the Key ring location drop-down menu.

      2. Select a key ring from the Key ring drop-down menu.

      3. Select a key name from the Key name drop-down menu.

      4. Provide the KMS Service Account.

    3. Optional: Select Enable FIPS cryptography if you require your cluster to be FIPS validated.

      If Enable FIPS cryptography is selected, Enable additional etcd encryption is enabled by default and cannot be disabled. You can select Enable additional etcd encryption without selecting Enable FIPS cryptography.

    4. Optional: Select Enable additional etcd encryption if you require etcd key value encryption. With this option, the etcd key values are encrypted, but not the keys. This option is in addition to the control plane storage encryption that encrypts the etcd volumes in OpenShift Dedicated clusters by default.

      By enabling etcd encryption for the key values in etcd, you incur a performance overhead of approximately 20%. The overhead is a result of introducing this second layer of encryption, in addition to the default control plane storage encryption that encrypts the etcd volumes. Consider enabling etcd encryption only if you specifically require it for your use case.

  10. Click Next.

  11. On the Machine pool page, select a Compute node instance type and a Compute node count. The number and types of nodes that are available depend on your OpenShift Dedicated subscription. If you are using multiple availability zones, the compute node count is per zone.

  12. Optional: Expand Add node labels to add labels to your nodes. Click Add additional label to add more node labels.

    This step refers to labels within Kubernetes, not Google Cloud. For more information regarding Kubernetes labels, see Labels and Selectors.

  13. Click Next.

  14. In the Cluster privacy dialog, select Public or Private to use either public or private API endpoints and application routes for your cluster. If you select Private, Use Private Service Connect is selected by default, and cannot be disabled. Private Service Connect (PSC) is Google Cloud’s security-enhanced networking feature.

  15. Optional: To install the cluster in an existing GCP Virtual Private Cloud (VPC):

    1. Select Install into an existing VPC.

      Private Service Connect is supported only with Install into an existing VPC.

    2. If you are installing into an existing VPC and you want to enable an HTTP or HTTPS proxy for your cluster, select Configure a cluster-wide proxy.

      In order to configure a cluster-wide proxy for your cluster, you must first create the Cloud network address translation (NAT) and a Cloud router. See the Additional resources section for more information.
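As a sketch, the Cloud Router and Cloud NAT could be created with gcloud commands like the following. This is a hypothetical example: the network name my-vpc, region us-east1, and resource names osd-router and osd-nat are assumptions, and the commands are echoed rather than executed so you can review and adapt them first.

```shell
# Hypothetical example; replace the network, region, and resource names with
# values from your environment, then run the commands without the echo.
NETWORK="my-vpc"
REGION="us-east1"

# Cloud Router that the NAT gateway attaches to
ROUTER_CMD="gcloud compute routers create osd-router --network=${NETWORK} --region=${REGION}"

# Cloud NAT gateway providing outbound internet access for private nodes
NAT_CMD="gcloud compute routers nats create osd-nat --router=osd-router --region=${REGION} --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges"

echo "${ROUTER_CMD}"
echo "${NAT_CMD}"
```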

  16. Accept the default application ingress settings, or to create your own custom settings, select Custom Settings.

    1. Optional: Provide route selector.

    2. Optional: Provide excluded namespaces.

    3. Select a namespace ownership policy.

    4. Select a wildcard policy.

      For more information about custom application ingress settings, click on the information icon provided for each setting.

  17. Click Next.

  18. Optional: To install the cluster into a GCP Shared VPC, follow these steps.

    The VPC owner of the host project must enable a project as a host project in their Google Cloud console and add the Compute Network Administrator, Compute Security Administrator, and DNS Administrator roles to the following service accounts prior to cluster installation:

    • osd-deployer

    • osd-control-plane

    • openshift-machine-api-gcp

    Failure to do so will cause the cluster to go into the "Installation Waiting" state. If this occurs, you must contact the VPC owner of the host project to assign the roles to the service accounts listed above. The VPC owner of the host project has 30 days to grant the listed permissions before the cluster creation fails. For more information, see Enable a host project and Provision Shared VPC.

    1. Select Install into GCP Shared VPC.

    2. Specify the Host project ID. If the specified host project ID is incorrect, cluster creation fails.

    3. If you opted to install the cluster in an existing GCP VPC, provide your Virtual Private Cloud (VPC) subnet settings and select Next. You must have created the Cloud network address translation (NAT) and a Cloud router. See Additional resources for information about Cloud NATs and Google VPCs.

      If you are installing a cluster into a Shared VPC, the VPC name and subnets are shared from the host project.

  19. Click Next.

  20. If you opted to configure a cluster-wide proxy, provide your proxy configuration details on the Cluster-wide proxy page:

    1. Enter a value in at least one of the following fields:

      • Specify a valid HTTP proxy URL.

      • Specify a valid HTTPS proxy URL.

      • In the Additional trust bundle field, provide a PEM encoded X.509 certificate bundle. The bundle is added to the trusted certificate store for the cluster nodes. An additional trust bundle file is required if you use a TLS-inspecting proxy unless the identity certificate for the proxy is signed by an authority from the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle. This requirement applies regardless of whether the proxy is transparent or requires explicit configuration using the http-proxy and https-proxy arguments.

    2. Click Next.

      For more information about configuring a proxy with OpenShift Dedicated, see Configuring a cluster-wide proxy.

  21. In the CIDR ranges dialog, configure custom classless inter-domain routing (CIDR) ranges or use the defaults that are provided.

    CIDR configurations cannot be changed later. Confirm your selections with your network administrator before proceeding.

    If the cluster privacy is set to Private, you cannot access your cluster until you configure private connections in your cloud provider.
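Because CIDR configurations cannot be changed later, it can help to sanity-check that your chosen ranges do not overlap before proceeding. The following is a minimal bash sketch (IPv4 only); the example ranges are illustrative values, not your required settings.

```shell
# Converts a dotted-quad IPv4 address to a 32-bit integer.
ip2int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# Succeeds (exit 0) when the two CIDR ranges overlap.
cidr_overlap() {
  net1="${1%/*}"; len1="${1#*/}"; net2="${2%/*}"; len2="${2#*/}"
  start1=$(( $(ip2int "$net1") & (0xFFFFFFFF << (32 - len1)) & 0xFFFFFFFF ))
  end1=$(( start1 + (1 << (32 - len1)) - 1 ))
  start2=$(( $(ip2int "$net2") & (0xFFFFFFFF << (32 - len2)) & 0xFFFFFFFF ))
  end2=$(( start2 + (1 << (32 - len2)) - 1 ))
  [ "$start1" -le "$end2" ] && [ "$start2" -le "$end1" ]
}

# Disjoint ranges pass the check; nested ranges are flagged.
cidr_overlap 10.0.0.0/16 172.30.0.0/16 && echo "overlap" || echo "ok: no overlap"
cidr_overlap 10.0.0.0/16 10.0.128.0/17 && echo "overlap" || echo "ok: no overlap"
```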

  22. On the Cluster update strategy page, configure your update preferences:

    1. Choose a cluster update method:

      • Select Individual updates if you want to schedule each update individually. This is the default option.

      • Select Recurring updates to update your cluster on your preferred day and start time, when updates are available.

        You can review the end-of-life dates in the update lifecycle documentation for OpenShift Dedicated. For more information, see OpenShift Dedicated update life cycle.

    2. Provide administrator approval based on your cluster update method:

      • Individual updates: If you select an update version that requires approval, provide an administrator’s acknowledgment and click Approve and continue.

      • Recurring updates: If you selected recurring updates for your cluster, provide an administrator’s acknowledgment and click Approve and continue. OpenShift Cluster Manager does not start scheduled y-stream updates for minor versions without receiving an administrator’s acknowledgment.

    3. If you opted for recurring updates, select a preferred day of the week and upgrade start time in UTC from the drop-down menus.

    4. Optional: You can set a grace period for Node draining during cluster upgrades. A 1 hour grace period is set by default.

    5. Click Next.

      In the event of critical security concerns that significantly impact the security or stability of a cluster, Red Hat Site Reliability Engineering (SRE) might schedule automatic updates to the latest z-stream version that is not impacted. The updates are applied within 48 hours after customer notifications are provided. For a description of the critical impact security rating, see Understanding Red Hat security ratings.

  23. Review the summary of your selections and click Create cluster to start the cluster installation. The installation takes approximately 30-40 minutes to complete.

  24. Optional: On the Overview tab, you can enable the delete protection feature by selecting Enable, which is located directly under Delete Protection: Disabled. This will prevent your cluster from being deleted. To disable delete protection, select Disable. By default, clusters are created with the delete protection feature disabled.

    Verification
    • You can monitor the progress of the installation in the Overview page for your cluster. You can view the installation logs on the same page. Your cluster is ready when the Status in the Details section of the page is listed as Ready.

Creating a Workload Identity Federation cluster using the OCM CLI

You can create an OpenShift Dedicated on Google Cloud Platform (GCP) cluster with Workload Identity Federation (WIF) using the OpenShift Cluster Manager CLI (ocm) in interactive or non-interactive mode.

To create a WIF-enabled cluster, the OpenShift Cluster Manager CLI (ocm) must be version 1.0.2 or greater.

Before creating the cluster, you must first create a WIF configuration.

Migrating an existing non-WIF cluster to a WIF configuration is not supported. This feature can only be enabled during new cluster creation.

Creating a WIF configuration

Procedure

You can create a WIF configuration using the auto mode or the manual mode.

The auto mode enables you to automatically create the service accounts for OpenShift Dedicated components as well as other IAM resources.

Alternatively, you can use the manual mode. In manual mode, you are provided with commands within a script.sh file which you use to manually create the service accounts for OpenShift Dedicated components as well as other IAM resources.

  • Based on your mode preference, run one of the following commands to create a WIF configuration:

    • Create a WIF configuration in auto mode by running the following command:

      $ ocm gcp create wif-config --name <wif_name> \ (1)
        --project <gcp_project_id> (2)
      1 Replace <wif_name> with the name of your WIF configuration.
      2 Replace <gcp_project_id> with the ID of the Google Cloud Platform (GCP) project where the WIF configuration will be implemented.
      Example output
      2024/09/26 13:05:41 Creating workload identity configuration...
      2024/09/26 13:05:47 Workload identity pool created with name 2e1kcps6jtgla8818vqs8tbjjls4oeub
      2024/09/26 13:05:47 workload identity provider created with name oidc
      2024/09/26 13:05:48 IAM service account osd-worker-oeub created
      2024/09/26 13:05:49 IAM service account osd-control-plane-oeub created
      2024/09/26 13:05:49 IAM service account openshift-gcp-ccm-oeub created
      2024/09/26 13:05:50 IAM service account openshift-gcp-pd-csi-driv-oeub created
      2024/09/26 13:05:50 IAM service account openshift-image-registry-oeub created
      2024/09/26 13:05:51 IAM service account openshift-machine-api-gcp-oeub created
      2024/09/26 13:05:51 IAM service account osd-deployer-oeub created
      2024/09/26 13:05:52 IAM service account cloud-credential-operator-oeub created
      2024/09/26 13:05:52 IAM service account openshift-cloud-network-c-oeub created
      2024/09/26 13:05:53 IAM service account openshift-ingress-gcp-oeub created
      2024/09/26 13:05:55 Role "osd_deployer_v4.17" updated
    • Create a WIF configuration in manual mode by running the following command:

      $ ocm gcp create wif-config --name <wif_name> \ (1)
        --project <gcp_project_id> \ (2)
        --mode=manual
      1 Replace <wif_name> with the name of your WIF configuration.
      2 Replace <gcp_project_id> with the ID of the Google Cloud Platform (GCP) project where the WIF configuration will be implemented.

      Once the WIF is configured, the following service accounts, roles, and groups are created.

      Table 1. WIF configuration service accounts, groups, and roles
      Each service account or group is listed with its GCP pre-defined roles and Red Hat custom roles:

      • osd-deployer: osd_deployer_v4.17

      • osd-control-plane: compute.instanceAdmin, compute.networkAdmin, compute.securityAdmin, compute.storageAdmin

      • osd-worker: compute.storageAdmin, compute.viewer

      • cloud-credential-operator-gcp-ro-creds: cloud_credential_operator_gcp_ro_creds_v4.17

      • openshift-cloud-network-config-controller-gcp: openshift_cloud_network_config_controller_gcp_v4.17

      • openshift-gcp-ccm: openshift_gcp_ccm_v4.17

      • openshift-gcp-pd-csi-driver-operator: compute.storageAdmin, iam.serviceAccountUser, resourcemanager.tagUser, openshift_gcp_pd_csi_driver_operator_v4.17

      • openshift-image-registry-gcp: openshift_image_registry_gcs_v4.17

      • openshift-ingress-gcp: openshift_ingress_gcp_v4.17

      • openshift-machine-api-gcp: openshift_machine_api_gcp_v4.17

      • Access via SRE group (sd-sre-platform-gcp-access): sre_managed_support

For further details about WIF configuration roles and their assigned permissions, see managed-cluster-config.

Creating a WIF cluster

Procedure

You can create a WIF cluster using the interactive mode or the non-interactive mode.

In interactive mode, cluster attributes are displayed automatically as prompts during the creation of the cluster. You enter the values for those prompts based on specified requirements in the fields provided.

In non-interactive mode, you specify the values for specific parameters within the command.

  • Based on your mode preference, run one of the following commands to create an OpenShift Dedicated on Google Cloud Platform (GCP) cluster with a WIF configuration:

    • Create a cluster in interactive mode by running the following command:

      $ ocm create cluster --interactive (1)
      1 Interactive mode enables you to specify configuration options at the interactive prompts.
    • Create a cluster in non-interactive mode by running the following command:

      The following example is made up of optional and required parameters and may differ from your non-interactive mode command. Parameters not identified as optional are required. For additional details about these and other parameters, run the ocm create cluster --help command in your terminal window.

      $ ocm create cluster <cluster_name> \ (1)
      --provider=gcp \ (2)
      --ccs=true \ (3)
      --wif-config <wif_name> \ (4)
      --region <gcp_region> \ (5)
      --subscription-type=marketplace-gcp \ (6)
      --marketplace-gcp-terms=true \ (7)
      --version <version> \ (8)
      --multi-az=true  \ (9)
      --enable-autoscaling=true \ (10)
      --min-replicas=3 \ (11)
      --max-replicas=6 \ (12)
      --secure-boot-for-shielded-vms=true (13)
      
      1 Replace <cluster_name> with a name for your cluster.
      2 Set value to gcp.
      3 Set value to true.
      4 Replace <wif_name> with the name of your WIF configuration.
      5 Replace <gcp_region> with the Google Cloud Platform (GCP) region where the new cluster will be deployed.
      6 Optional: The subscription billing model for the cluster.
      7 Optional: If you provided a value of marketplace-gcp for the subscription-type parameter, marketplace-gcp-terms must be equal to true.
      8 Optional: The desired OpenShift version.
      9 Optional: Deploy to multiple data centers.
      10 Optional: Enable autoscaling of compute nodes.
      11 Optional: Minimum number of compute nodes.
      12 Optional: Maximum number of compute nodes.
      13 Optional: Secure Boot enables the use of Shielded VMs in the Google Cloud Platform.

Updating a WIF configuration

Updating a WIF configuration is only applicable for y-stream updates. For an overview of the update process, including details regarding version semantics, see The Ultimate Guide to OpenShift Release and Upgrade Process for Cluster Administrators.

Before updating a WIF-enabled OpenShift Dedicated cluster to a newer version, you must update the wif-config to that version as well. If you do not update the wif-config version before attempting to update the cluster version, the cluster version update will fail.
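The y-stream matching described above refers to the major.minor portion of the version string. A small hypothetical helper makes the rule concrete:

```shell
# Extracts the y-stream (major.minor) from a full OpenShift version string.
# The wif-config must be updated to the y-stream of the target cluster version.
ystream() {
  printf '%s\n' "$1" | cut -d. -f1-2
}

ystream "4.17.9"   # prints 4.17
```

For example, before updating a cluster to 4.17.9, the wif-config must already be at version 4.17.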

You can update a wif-config to a specific OpenShift Dedicated version by running the following command:

$ ocm gcp update wif-config --version <version> \ (1)
  --name <wif_name> (2)
1 Replace <version> with the OpenShift Dedicated y-stream version you plan to update the cluster to.
2 Replace <wif_name> with the name of the WIF configuration you want to update.

Additional resources