In OpenShift Container Platform version 4.10, you can install a cluster on Microsoft Azure Stack Hub by using infrastructure that you provide.

Several Azure Resource Manager (ARM) templates are provided to assist in completing these steps or to help model your own.

The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure that you provide requires knowledge of the cloud provider and the OpenShift Container Platform installation process. You are also free to create the required resources through other methods; the templates are only an example.

Internet access for OpenShift Container Platform

In OpenShift Container Platform 4.10, you require access to the internet to install your cluster.

You must have internet access to:

  • Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.

  • Access Quay.io to obtain the packages that are required to install your cluster.

  • Obtain the packages that are required to perform cluster updates.

If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
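
As an illustration of that workflow, the oc CLI can mirror a release image into a local registry. The following is a sketch only: the registry host, repository, and version are placeholders, and the complete mirroring procedure has additional prerequisites that are documented separately.

$ oc adm release mirror \
    --from=quay.io/openshift-release-dev/ocp-release:<version>-x86_64 \
    --to=<mirror_registry>/<repository> \
    --to-release-image=<mirror_registry>/<repository>:<version>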

Configuring your Azure Stack Hub project

Before you can install OpenShift Container Platform, you must configure an Azure project to host it.

All Azure Stack Hub resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure Stack Hub restricts, see Resolve reserved resource name errors in the Azure documentation.

Azure Stack Hub account limits

The OpenShift Container Platform cluster uses a number of Microsoft Azure Stack Hub components, and the default Quota types in Azure Stack Hub affect your ability to install OpenShift Container Platform clusters.

The following table summarizes the Azure Stack Hub components whose limits can impact your ability to install and run OpenShift Container Platform clusters.

Component | Number of components required by default | Description

vCPU

56

A default cluster requires 56 vCPUs, so you must increase the account limit.

By default, each cluster creates the following instances:

  • One bootstrap machine, which is removed after installation

  • Three control plane machines

  • Three compute machines

Because each of these seven machines uses a Standard_DS4_v2 virtual machine with 8 vCPUs, a default cluster requires 56 vCPUs (7 × 8). The bootstrap node VM is used only during installation.

To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, you must further increase the vCPU limit for your account to ensure that your cluster can deploy the machines that you require. A quick way to check current vCPU usage is shown after this table.

VNet

1

Each default cluster requires one Virtual Network (VNet), which contains two subnets.

Network interfaces

7

Each default cluster requires seven network interfaces. If you create more machines or your deployed workloads create load balancers, your cluster uses more network interfaces.

Network security groups

2

Each cluster creates network security groups for each subnet in the VNet. The default cluster creates network security groups for the control plane and for the compute node subnets:

controlplane

Allows the control plane machines to be reached on port 6443 from anywhere

node

Allows worker nodes to be reached from the internet on ports 80 and 443

Network load balancers

3

Each cluster creates the following load balancers:

default

Public IP address that load balances requests to ports 80 and 443 across worker machines

internal

Private IP address that load balances requests to ports 6443 and 22623 across control plane machines

external

Public IP address that load balances requests to port 6443 across control plane machines

If your applications create more Kubernetes LoadBalancer service objects, your cluster uses more load balancers; see the example Service manifest after this table.

Public IP addresses

2

The public load balancer uses a public IP address. The bootstrap machine also uses a public IP address so that you can SSH into the machine to troubleshoot issues during installation. The IP address for the bootstrap node is used only during installation.

Private IP addresses

7

The internal load balancer, the three control plane machines, and the three worker machines each use a private IP address.
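
Before you install, you can check how much of the vCPU quota is already consumed in your region. The following Azure CLI command is a standard way to list compute usage; availability depends on the API profile that your Azure Stack Hub environment supports, so treat it as an optional check:

$ az vm list-usage --location <region> --output table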
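
For reference, the following is the kind of Kubernetes Service object that consumes an additional load balancer after installation. It is a minimal illustration; the name, selector, and ports are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: example-app
spec:
  type: LoadBalancer
  selector:
    app: example-app
  ports:
  - port: 80
    targetPort: 8080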

Configuring a DNS zone in Azure Stack Hub

To successfully install OpenShift Container Platform on Azure Stack Hub, you must create DNS records in an Azure Stack Hub DNS zone. The DNS zone must be authoritative for the domain. To delegate a registrar’s DNS zone to Azure Stack Hub, see Microsoft’s documentation for Azure Stack Hub datacenter DNS integration.

For an overview of Azure’s DNS solution, see the example for creating DNS zones in the Azure documentation.
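
If the DNS service is available in your Azure Stack Hub environment, a zone can be created with the Azure CLI. This is a sketch that assumes the resource group already exists; the group and domain names are placeholders:

$ az network dns zone create --resource-group <resource_group> --name <base_domain>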

Certificate signing requests management

Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them.
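
For reference, pending CSRs can be listed and approved with the oc client after the cluster is running. The bulk one-liner approves every CSR that is returned, not only pending ones, so review the output of the first command before using it:

$ oc get csr
$ oc adm certificate approve <csr_name>
$ oc get csr -o name | xargs oc adm certificate approve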

Required Azure Stack Hub roles

Your Microsoft Azure Stack Hub account must have the following roles for the subscription that you use:

  • Owner

To set roles on the Azure portal, see Manage access to resources in Azure Stack Hub with role-based access control in the Microsoft documentation.

Creating a service principal

Because OpenShift Container Platform and its installation program create Microsoft Azure resources by using the Azure Resource Manager, you must create a service principal to represent it.

Prerequisites
  • Install or update the Azure CLI.

  • Your Azure account has the required roles for the subscription that you use.

Procedure
  1. Register your environment:

    $ az cloud register -n AzureStackCloud --endpoint-resource-manager <endpoint> (1)
    1 Specify the Azure Resource Manager endpoint, `https://management.<region>.<fqdn>/`.

    See the Microsoft documentation for details.

  2. Set the active environment:

    $ az cloud set -n AzureStackCloud
  3. Update your environment configuration to use the specific API version for Azure Stack Hub:

    $ az cloud update --profile 2019-03-01-hybrid
  4. Log in to the Azure CLI:

    $ az login

    If you are in a multitenant environment, you must also supply the tenant ID.
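
    For example, a multitenant login passes the tenant ID explicitly; the value is a placeholder:

    $ az login --tenant <tenant_id>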

  5. If your Azure account uses subscriptions, ensure that you are using the right subscription:

    1. View the list of available accounts and record the tenantId value for the subscription you want to use for your cluster:

      $ az account list --refresh
      Example output
      [
        {
          "cloudName": AzureStackCloud",
          "id": "9bab1460-96d5-40b3-a78e-17b15e978a80",
          "isDefault": true,
          "name": "Subscription Name",
          "state": "Enabled",
          "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee",
          "user": {
            "name": "you@example.com",
            "type": "user"
          }
        }
      ]
    2. View your active account details and confirm that the tenantId value matches the subscription you want to use:

      $ az account show
      Example output
      {
        "environmentName": AzureStackCloud",
        "id": "9bab1460-96d5-40b3-a78e-17b15e978a80",
        "isDefault": true,
        "name": "Subscription Name",
        "state": "Enabled",
        "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", (1)
        "user": {
          "name": "you@example.com",
          "type": "user"
        }
      }
      1 Ensure that the value of the tenantId parameter is the correct tenant ID for the subscription that you want to use.
    3. If you are not using the right subscription, change the active subscription:

      $ az account set -s <subscription_id> (1)
      1 Specify the subscription ID.
    4. Verify the subscription ID update:

      $ az account show
      Example output
      {
        "environmentName": AzureStackCloud",
        "id": "33212d16-bdf6-45cb-b038-f6565b61edda",
        "isDefault": true,
        "name": "Subscription Name",
        "state": "Enabled",
        "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee",
        "user": {
          "name": "you@example.com",
          "type": "user"
        }
      }
  6. Record the tenantId and id parameter values from the output. You need these values during the OpenShift Container Platform installation.
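
    If you prefer to capture these values programmatically, the Azure CLI supports JMESPath queries. This optional example assumes the field names shown in the output above:

    $ az account show --query "{tenantId:tenantId, subscriptionId:id}" --output json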

  7. Create the service principal for your account:

    $ az ad sp create-for-rbac --role Contributor --name <service_principal> \ (1)
      --scopes /subscriptions/<subscription_id> (2)
    
    1 Specify the service principal name.
    2 Specify the subscription ID.
    Example output
    Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>'
    The output includes credentials that you must protect. Be sure that you do not
    include these credentials in your code or check the credentials into your source
    control. For more information, see https://aka.ms/azadsp-cli
    {
      "appId": "ac461d78-bf4b-4387-ad16-7e32e328aec6",
      "displayName": <service_principal>",
      "password": "00000000-0000-0000-0000-000000000000",
      "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee"
    }
  8. Record the values of the appId and password parameters from the previous output. You need these values during OpenShift Container Platform installation.

Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites
  • You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure
  1. Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.

  2. Select Azure as the cloud provider.

  3. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files.

    The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

    Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

  4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

    $ tar -xvf openshift-install-linux.tar.gz
  5. Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.

Generating a key pair for cluster node SSH access

During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.

After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.

If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.

Do not skip this procedure in production environments, where disaster recovery and debugging are required.

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure
  1. If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:

    $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> (1)
    1 Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory.

    If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
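
    For example, an RSA key can be generated as follows; the 4096-bit length is an illustrative choice, not a documented requirement:

    $ ssh-keygen -t rsa -b 4096 -N '' -f <path>/<file_name>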

  2. View the public SSH key:

    $ cat <path>/<file_name>.pub

    For example, run the following to view the ~/.ssh/id_ed25519.pub public key:

    $ cat ~/.ssh/id_ed25519.pub
  3. Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.

    On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.

    1. If the ssh-agent process is not already running for your local user, start it as a background task:

      $ eval "$(ssh-agent -s)"
      Example output
      Agent pid 31874

      If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.

  4. Add your SSH private key to the ssh-agent:

    $ ssh-add <path>/<file_name> (1)
    1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519.
    Example output
    Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
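
    Optionally, confirm that the identity is loaded by listing the keys that the agent currently holds:

    $ ssh-add -l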
Next steps
  • When you install OpenShift Container Platform, provide the SSH public key to the installation program.

Creating the installation files for Azure Stack Hub

To install OpenShift Container Platform on Microsoft Azure Stack Hub using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You manually create the install-config.yaml file, and then generate and customize the Kubernetes manifests and Ignition config files. You also have the option to first set up a separate /var partition during the preparation phases of installation.

Manually creating the installation configuration file

Prerequisites
  • You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery.

  • You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure
  1. Create an installation directory to store your required installation assets in:

    $ mkdir <installation_directory>

    You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

  2. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory>.

    You must name this configuration file install-config.yaml.

    Make the following modifications for Azure Stack Hub:

    1. Set the replicas parameter to 0 for the compute pool:

      compute:
      - hyperthreading: Enabled
        name: worker
        platform: {}
        replicas: 0 (1)
      1 Set to 0.

      The compute machines will be provisioned manually later.

    2. Update the platform.azure section of the install-config.yaml file to configure it for your Azure Stack Hub environment:

      platform:
        azure:
          armEndpoint: <azurestack_arm_endpoint> (1)
          baseDomainResourceGroupName: <resource_group> (2)
          cloudName: AzureStackCloud (3)
          region: <azurestack_region> (4)
      1 Specify the Azure Resource Manager endpoint of your Azure Stack Hub environment, like https://management.local.azurestack.external.
      2 Specify the name of the resource group that contains the DNS zone for your base domain.
      3 Specify the Azure Stack Hub environment, which is used to configure the Azure SDK with the appropriate Azure API endpoints.
      4 Specify the name of your Azure Stack Hub region.
  3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

    The install-config.yaml file is consumed during the next step of the installation process. You must back it up now.
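
    For example, a simple copy preserves the file before it is consumed; the backup file name is only a suggestion:

    $ cp <installation_directory>/install-config.yaml <installation_directory>/install-config.yaml.backup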

Sample customized install-config.yaml file for Azure Stack Hub

You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.

This sample YAML file is provided for reference only. Use it as a resource to enter parameter values into the installation configuration file that you created manually.

apiVersion: v1
baseDomain: example.com
controlPlane: (1)
  name: master
  platform:
    azure:
      osDisk:
        diskSizeGB: 1024 (2)
        diskType: premium_LRS
  replicas: 3
compute: (1)
- name: worker
  platform:
    azure:
      osDisk:
        diskSizeGB: 512 (2)
        diskType: premium_LRS
  replicas: 0
metadata:
  name: test-cluster (3)
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  azure:
    armEndpoint: azurestack_arm_endpoint (4)
    baseDomainResourceGroupName: resource_group (5)
    region: azure_stack_local_region (6)
    resourceGroupName: existing_resource_group (7)
    outboundType: Loadbalancer
    cloudName: AzureStackCloud (8)
pullSecret: '{"auths": ...}' (9)
fips: false (10)
additionalTrustBundle: | (11)
    -----BEGIN CERTIFICATE-----
    <MY_TRUSTED_CA_CERT>
    -----END CERTIFICATE-----
sshKey: ssh-ed25519 AAAA... (12)
1 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used.
2 You can specify the size of the disk to use in GB. The minimum recommended size for control plane nodes is 1024 GB.
3 Specify the name of the cluster.
4 Specify the Azure Resource Manager endpoint that your Azure Stack Hub operator provides.
5 Specify the name of the resource group that contains the DNS zone for your base domain.
6 Specify the name of your Azure Stack Hub local region.
7 Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster.
8 Specify the Azure Stack Hub environment as your target platform.
9 Specify the pull secret required to authenticate your cluster.
10 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.

11 If your Azure Stack Hub environment uses an internal certificate authority (CA), add the necessary certificate bundle in .pem format.
12 You can optionally provide the sshKey value that you use to access the machines in your cluster.