In OpenShift Container Platform version 4.2, you can install a cluster on Amazon Web Services (AWS) by using infrastructure that you provide.

One way to create this infrastructure is to use the provided CloudFormation templates. You can modify the templates to customize your infrastructure or use the information that they contain to create AWS objects according to your company’s policies.

Prerequisites

Internet and Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.2, you require access to the internet to install and entitle your cluster. The Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, also requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to the Red Hat OpenShift Cluster Manager. From there, you can allocate entitlements to your cluster.

You must have internet access to:

  • Access the Red Hat OpenShift Cluster Manager page to download the installation program and perform subscription management and entitlement. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. If the Telemetry service cannot entitle your cluster, you must manually entitle it on the Cluster registration page.

  • Access Quay.io to obtain the packages that are required to install your cluster.

  • Obtain the packages that are required to perform cluster updates.

If your cluster cannot have direct internet access, you can perform a restricted network installation on infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.

Required AWS infrastructure components

To install OpenShift Container Platform on user-provisioned infrastructure in Amazon Web Services (AWS), you must manually create both the machines and their supporting infrastructure.

For more information about the integration testing for different platforms, see the OpenShift Container Platform 4.x Tested Integrations page.

You can use the provided CloudFormation templates to create this infrastructure, you can manually create the components, or you can reuse existing infrastructure that meets the cluster requirements. Review the CloudFormation templates for more details about how the components interrelate.

Cluster machines

You need AWS::EC2::Instance objects for the following machines:

  • A bootstrap machine. This machine is required during installation, but you can remove it after your cluster deploys.

  • At least three control plane machines. The control plane machines are not governed by a MachineSet.

  • Compute machines. You must create at least two compute, or worker, machines during installation. These machines are not governed by a MachineSet.

You can use the following instance types for the cluster machines with the provided CloudFormation templates.

If m4 instance types are not available in your region, such as eu-west-3, use m5 types instead.
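
To check whether a particular instance type is offered in your region before you commit to it, newer versions of the AWS CLI provide an offerings query. A minimal sketch, where <region> is a placeholder for your target region:

  $ aws ec2 describe-instance-type-offerings --region <region> \
       --filters Name=instance-type,Values=m4.xlarge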

Table 1. Instance types for machines
Instance type            Bootstrap   Control plane   Compute

i3.large                 x
m4.large or m5.large                                 x
m4.xlarge or m5.xlarge               x               x
m4.2xlarge                           x               x
m4.4xlarge                           x               x
m4.8xlarge                           x               x
m4.10xlarge                          x               x
m4.16xlarge                          x               x
c4.large                                             x
c4.xlarge                                            x
c4.2xlarge                           x               x
c4.4xlarge                           x               x
c4.8xlarge                           x               x
r4.large                                             x
r4.xlarge                            x               x
r4.2xlarge                           x               x
r4.4xlarge                           x               x
r4.8xlarge                           x               x
r4.16xlarge                          x               x

You might be able to use other instance types that meet the specifications of these instance types.

Other infrastructure components

  • A VPC

  • DNS entries

  • Load balancers and listeners

  • A public and a private Route53 zone

  • Security groups

  • IAM roles

  • S3 buckets

Required VPC components

You must provide a suitable VPC and subnets that allow communication to your machines.

Component AWS type Description

VPC

  • AWS::EC2::VPC

  • AWS::EC2::VPCEndpoint

You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3.

Public subnets

  • AWS::EC2::Subnet

  • AWS::EC2::SubnetNetworkAclAssociation

Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules.

Internet gateway

  • AWS::EC2::InternetGateway

  • AWS::EC2::VPCGatewayAttachment

  • AWS::EC2::RouteTable

  • AWS::EC2::Route

  • AWS::EC2::SubnetRouteTableAssociation

  • AWS::EC2::NatGateway

  • AWS::EC2::EIP

You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private-subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios.

Network access control

  • AWS::EC2::NetworkAcl

  • AWS::EC2::NetworkAclEntry

You must allow the VPC to access the following ports:

Port          Reason

80            Inbound HTTP traffic
443           Inbound HTTPS traffic
22            Inbound SSH traffic
1024 - 65535  Inbound ephemeral traffic
0 - 65535     Outbound ephemeral traffic

Private subnets

  • AWS::EC2::Subnet

  • AWS::EC2::RouteTable

  • AWS::EC2::SubnetRouteTableAssociation

Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them.

Required DNS and load balancing components

Your DNS and load balancer configuration needs to use a public hosted zone and can use a private hosted zone similar to the one that the installation program uses if it provisions the cluster’s infrastructure. You must create a DNS entry that resolves to your load balancer. An entry for api.<cluster_name>.<domain> must point to the external load balancer, and an entry for api-int.<cluster_name>.<domain> must point to the internal load balancer.

The cluster also requires load balancers and listeners for port 6443, which is required for the Kubernetes API and its extensions, and for port 22623, which is required for the Ignition config files for new machines. The targets are the master nodes. Port 6443 must be accessible to both clients external to the cluster and nodes within the cluster. Port 22623 must be accessible to nodes within the cluster.
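
After you create these records, you can spot-check them with standard DNS tooling; for example, with dig. The names are placeholders, and the api-int record resolves only from hosts that can see the private hosted zone:

  $ dig +short api.<cluster_name>.<domain>
  $ dig +short api-int.<cluster_name>.<domain>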

Component                   AWS type                                   Description

DNS                         AWS::Route53::HostedZone                   The hosted zone for your internal DNS.
etcd record sets            AWS::Route53::RecordSet                    The registration records for etcd for your control plane machines.
Public load balancer        AWS::ElasticLoadBalancingV2::LoadBalancer  The load balancer for your public subnets.
External API server record  AWS::Route53::RecordSetGroup               Alias records for the external API server.
External listener           AWS::ElasticLoadBalancingV2::Listener      A listener on port 6443 for the external load balancer.
External target group       AWS::ElasticLoadBalancingV2::TargetGroup   The target group for the external load balancer.
Private load balancer       AWS::ElasticLoadBalancingV2::LoadBalancer  The load balancer for your private subnets.
Internal API server record  AWS::Route53::RecordSetGroup               Alias records for the internal API server.
Internal listener           AWS::ElasticLoadBalancingV2::Listener      A listener on port 22623 for the internal load balancer.
Internal target group       AWS::ElasticLoadBalancingV2::TargetGroup   The target group for the internal load balancer.
Internal listener           AWS::ElasticLoadBalancingV2::Listener      A listener on port 6443 for the internal load balancer.
Internal target group       AWS::ElasticLoadBalancingV2::TargetGroup   The target group for the internal load balancer.

Security groups

The control plane and worker machines require access to the following ports:

Group                   Type                     IP protocol  Port range

MasterSecurityGroup     AWS::EC2::SecurityGroup  icmp         0
                                                 tcp          22
                                                 tcp          6443
                                                 tcp          22623
WorkerSecurityGroup     AWS::EC2::SecurityGroup  icmp         0
                                                 tcp          22
BootstrapSecurityGroup  AWS::EC2::SecurityGroup  tcp          22
                                                 tcp          19531

Control plane Ingress

The control plane machines require the following Ingress groups. Each Ingress group is an AWS::EC2::SecurityGroupIngress resource.

Ingress group                       Description                                           IP protocol  Port range

MasterIngressEtcd                   etcd                                                  tcp          2379 - 2380
MasterIngressVxlan                  Vxlan packets                                         udp          4789
MasterIngressWorkerVxlan            Vxlan packets                                         udp          4789
MasterIngressInternal               Internal cluster communication                        tcp          9000 - 9999
MasterIngressWorkerInternal         Internal cluster communication                        tcp          9000 - 9999
MasterIngressKube                   Kubernetes kubelet, scheduler and controller manager  tcp          10250 - 10259
MasterIngressWorkerKube             Kubernetes kubelet, scheduler and controller manager  tcp          10250 - 10259
MasterIngressIngressServices        Kubernetes Ingress services                           tcp          30000 - 32767
MasterIngressWorkerIngressServices  Kubernetes Ingress services                           tcp          30000 - 32767

Worker Ingress

The worker machines require the following Ingress groups. Each Ingress group is an AWS::EC2::SecurityGroupIngress resource.

Ingress group                       Description                                           IP protocol  Port range

WorkerIngressVxlan                  Vxlan packets                                         udp          4789
WorkerIngressWorkerVxlan            Vxlan packets                                         udp          4789
WorkerIngressInternal               Internal cluster communication                        tcp          9000 - 9999
WorkerIngressWorkerInternal         Internal cluster communication                        tcp          9000 - 9999
WorkerIngressKube                   Kubernetes kubelet, scheduler and controller manager  tcp          10250
WorkerIngressWorkerKube             Kubernetes kubelet, scheduler and controller manager  tcp          10250
WorkerIngressIngressServices        Kubernetes Ingress services                           tcp          30000 - 32767
WorkerIngressWorkerIngressServices  Kubernetes Ingress services                           tcp          30000 - 32767

Roles and instance profiles

You must grant the machines permissions in AWS. The provided CloudFormation templates grant the machines permissions through the following AWS::IAM::Role objects and provide an AWS::IAM::InstanceProfile for each set of roles. If you do not use the templates, you can grant the machines the following broad permissions or the following individual permissions.

Role       Effect  Action                  Resource

Master     Allow   ec2:*                   *
           Allow   elasticloadbalancing:*  *
           Allow   iam:PassRole            *
           Allow   s3:GetObject            *
Worker     Allow   ec2:Describe*           *
Bootstrap  Allow   ec2:Describe*           *
           Allow   ec2:AttachVolume        *
           Allow   ec2:DetachVolume        *

Required AWS permissions

When you attach the AdministratorAccess policy to the IAM user that you create, you grant that user all of the required permissions. To deploy an OpenShift Container Platform cluster, the IAM user requires the following permissions:
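
For example, to grant the broad set of permissions, you can attach the AdministratorAccess policy with a command like the following, where <user_name> is a placeholder for your IAM user:

  $ aws iam attach-user-policy --user-name <user_name> \
       --policy-arn arn:aws:iam::aws:policy/AdministratorAccess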

Required EC2 permissions for installation
  • ec2:AllocateAddress

  • ec2:AssociateAddress

  • ec2:AssociateDhcpOptions

  • ec2:AssociateRouteTable

  • ec2:AttachInternetGateway

  • ec2:AuthorizeSecurityGroupEgress

  • ec2:AuthorizeSecurityGroupIngress

  • ec2:CopyImage

  • ec2:CreateDhcpOptions

  • ec2:CreateInternetGateway

  • ec2:CreateNatGateway

  • ec2:CreateRoute

  • ec2:CreateRouteTable

  • ec2:CreateSecurityGroup

  • ec2:CreateSubnet

  • ec2:CreateTags

  • ec2:CreateVpc

  • ec2:CreateVpcEndpoint

  • ec2:CreateVolume

  • ec2:DescribeAccountAttributes

  • ec2:DescribeAddresses

  • ec2:DescribeAvailabilityZones

  • ec2:DescribeDhcpOptions

  • ec2:DescribeImages

  • ec2:DescribeInstanceAttribute

  • ec2:DescribeInstanceCreditSpecifications

  • ec2:DescribeInstances

  • ec2:DescribeInternetGateways

  • ec2:DescribeKeyPairs

  • ec2:DescribeNatGateways

  • ec2:DescribeNetworkAcls

  • ec2:DescribePrefixLists

  • ec2:DescribeRegions

  • ec2:DescribeRouteTables

  • ec2:DescribeSecurityGroups

  • ec2:DescribeSubnets

  • ec2:DescribeTags

  • ec2:DescribeVpcEndpoints

  • ec2:DescribeVpcs

  • ec2:DescribeVpcAttribute

  • ec2:DescribeVolumes

  • ec2:DescribeVpcClassicLink

  • ec2:DescribeVpcClassicLinkDnsSupport

  • ec2:ModifyInstanceAttribute

  • ec2:ModifySubnetAttribute

  • ec2:ModifyVpcAttribute

  • ec2:RevokeSecurityGroupEgress

  • ec2:RunInstances

  • ec2:TerminateInstances

  • ec2:RevokeSecurityGroupIngress

  • ec2:ReplaceRouteTableAssociation

  • ec2:DescribeNetworkInterfaces

  • ec2:ModifyNetworkInterfaceAttribute

Required Elastic Load Balancing permissions for installation
  • elasticloadbalancing:AddTags

  • elasticloadbalancing:ApplySecurityGroupsToLoadBalancer

  • elasticloadbalancing:AttachLoadBalancerToSubnets

  • elasticloadbalancing:CreateListener

  • elasticloadbalancing:CreateLoadBalancer

  • elasticloadbalancing:CreateLoadBalancerListeners

  • elasticloadbalancing:CreateTargetGroup

  • elasticloadbalancing:ConfigureHealthCheck

  • elasticloadbalancing:DeregisterInstancesFromLoadBalancer

  • elasticloadbalancing:DeregisterTargets

  • elasticloadbalancing:DescribeInstanceHealth

  • elasticloadbalancing:DescribeListeners

  • elasticloadbalancing:DescribeLoadBalancers

  • elasticloadbalancing:DescribeLoadBalancerAttributes

  • elasticloadbalancing:DescribeTags

  • elasticloadbalancing:DescribeTargetGroupAttributes

  • elasticloadbalancing:DescribeTargetHealth

  • elasticloadbalancing:ModifyLoadBalancerAttributes

  • elasticloadbalancing:ModifyTargetGroup

  • elasticloadbalancing:ModifyTargetGroupAttributes

  • elasticloadbalancing:RegisterTargets

  • elasticloadbalancing:RegisterInstancesWithLoadBalancer

  • elasticloadbalancing:SetLoadBalancerPoliciesOfListener

Required IAM permissions for installation
  • iam:AddRoleToInstanceProfile

  • iam:CreateInstanceProfile

  • iam:CreateRole

  • iam:DeleteInstanceProfile

  • iam:DeleteRole

  • iam:DeleteRolePolicy

  • iam:GetInstanceProfile

  • iam:GetRole

  • iam:GetRolePolicy

  • iam:GetUser

  • iam:ListInstanceProfilesForRole

  • iam:ListRoles

  • iam:ListUsers

  • iam:PassRole

  • iam:PutRolePolicy

  • iam:RemoveRoleFromInstanceProfile

  • iam:SimulatePrincipalPolicy

  • iam:TagRole

Required Route53 permissions for installation
  • route53:ChangeResourceRecordSets

  • route53:ChangeTagsForResource

  • route53:GetChange

  • route53:GetHostedZone

  • route53:CreateHostedZone

  • route53:ListHostedZones

  • route53:ListHostedZonesByName

  • route53:ListResourceRecordSets

  • route53:ListTagsForResource

  • route53:UpdateHostedZoneComment

Required S3 permissions for installation
  • s3:CreateBucket

  • s3:DeleteBucket

  • s3:GetAccelerateConfiguration

  • s3:GetBucketCors

  • s3:GetBucketLocation

  • s3:GetBucketLogging

  • s3:GetBucketObjectLockConfiguration

  • s3:GetBucketReplication

  • s3:GetBucketRequestPayment

  • s3:GetBucketTagging

  • s3:GetBucketVersioning

  • s3:GetBucketWebsite

  • s3:GetEncryptionConfiguration

  • s3:GetLifecycleConfiguration

  • s3:GetReplicationConfiguration

  • s3:ListBucket

  • s3:PutBucketAcl

  • s3:PutBucketTagging

  • s3:PutEncryptionConfiguration

S3 permissions that cluster Operators require
  • s3:PutObject

  • s3:PutObjectAcl

  • s3:PutObjectTagging

  • s3:GetObject

  • s3:GetObjectAcl

  • s3:GetObjectTagging

  • s3:GetObjectVersion

  • s3:DeleteObject

All additional permissions that are required to uninstall a cluster
  • autoscaling:DescribeAutoScalingGroups

  • ec2:DeleteDhcpOptions

  • ec2:DeleteInternetGateway

  • ec2:DeleteNatGateway

  • ec2:DeleteNetworkInterface

  • ec2:DeleteRoute

  • ec2:DeleteRouteTable

  • ec2:DeleteSnapshot

  • ec2:DeleteSecurityGroup

  • ec2:DeleteSubnet

  • ec2:DeleteVolume

  • ec2:DeleteVpc

  • ec2:DeleteVpcEndpoints

  • ec2:DeregisterImage

  • ec2:DetachInternetGateway

  • ec2:DisassociateRouteTable

  • ec2:ReleaseAddress

  • elasticloadbalancing:DescribeTargetGroups

  • elasticloadbalancing:DeleteTargetGroup

  • elasticloadbalancing:DeleteLoadBalancer

  • iam:ListInstanceProfiles

  • iam:ListRolePolicies

  • iam:ListUserPolicies

  • route53:DeleteHostedZone

  • tag:GetResources
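
If you prefer to verify a more restrictive policy before you install, you can test individual actions with the IAM policy simulator; the iam:SimulatePrincipalPolicy permission that this requires is included in the preceding lists. A sketch, where <account_id> and <user_name> are placeholders:

  $ aws iam simulate-principal-policy \
       --policy-source-arn arn:aws:iam::<account_id>:user/<user_name> \
       --action-names ec2:RunInstances elasticloadbalancing:CreateLoadBalancer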

Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites
  • You must install the cluster from a computer that uses Linux or macOS.

  • You need 500 MB of local disk space to download the installation program.

Procedure
  1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.

  2. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.

    The installation program creates several files on the computer that you use to install your cluster. You must keep both the installation program and the files that the installation program creates after you finish installing the cluster.

  3. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

    $ tar xvf <installation_program>.tar.gz
  4. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your installation pull secret as a .txt file or copy it to your clipboard. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.

Generating an SSH private key and adding it to the agent

If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and to the installation program.

You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user’s ~/.ssh/authorized_keys list.

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
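
For example, after the cluster is deployed, you might connect to a control plane machine with a command like the following, where <master_node_ip> is a placeholder for the node's address:

  $ ssh -i <path>/<file_name> core@<master_node_ip>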

Procedure
  1. If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:

    $ ssh-keygen -t rsa -b 4096 -N '' \
        -f <path>/<file_name> (1)
    1 Specify the path and file name, such as ~/.ssh/id_rsa, of the SSH key.

    Running this command generates an SSH key that does not require a password in the location that you specified.

  2. Start the ssh-agent process as a background task:

    $ eval "$(ssh-agent -s)"
    
    Agent pid 31874
  3. Add your SSH private key to the ssh-agent:

    $ ssh-add <path>/<file_name> (1)
    
    Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
    1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.
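
    To confirm that the agent now holds the key, you can list its identities:

    $ ssh-add -l
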
Next steps
  • When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide this key to your cluster’s machines.

Creating the installation files for AWS

To install OpenShift Container Platform on Amazon Web Services (AWS) using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files.

Creating the installation configuration file

Generate and customize the installation configuration file that the installation program needs to deploy your cluster.

Prerequisites
  • Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure
  1. Obtain the install-config.yaml file.

    1. Run the following command:

      $ ./openshift-install create install-config --dir=<installation_directory> (1)
      1 For <installation_directory>, specify the directory name to store the files that the installation program creates.

      Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

    2. At the prompts, provide the configuration details for your cloud:

      1. Optional: Select an SSH key to use to access your cluster machines.

        For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

      2. Select aws as the platform to target.

      3. If you do not have an AWS profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.

      4. Select the AWS region to deploy the cluster to.

      5. Select the base domain for the Route53 service that you configured for your cluster.

      6. Enter a descriptive name for your cluster.

      7. Paste the pull secret that you obtained from the Pull Secret page on the Red Hat OpenShift Cluster Manager site.

  2. Edit the install-config.yaml file to set the number of compute, or worker, replicas to 0, as shown in the following compute stanza:

    compute:
    - hyperthreading: Enabled
      name: worker
      platform: {}
      replicas: 0
  3. Optional: Back up the install-config.yaml file.

    The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
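
    For example, a plain copy is enough; the backup file name here is only a suggestion:

    $ cp install-config.yaml install-config.yaml.backup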

Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Prerequisites
  • An existing install-config.yaml file.

  • Review the sites that your cluster requires access to and determine whether any need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. Add sites to the Proxy object’s spec.noProxy field to bypass the proxy if necessary.

    The Proxy object’s status.noProxy field is populated by default with the instance metadata endpoint (169.254.169.254) and with the values of the networking.machineCIDR, networking.clusterNetwork.cidr, and networking.serviceNetwork fields from your installation configuration.

Procedure
  1. Edit your install-config.yaml file and add the proxy settings. For example:

    apiVersion: v1
    baseDomain: my.domain.com
    proxy:
      httpProxy: http://<username>:<pswd>@<ip>:<port> (1)
      httpsProxy: http://<username>:<pswd>@<ip>:<port> (2)
      noProxy: example.com (3)
    additionalTrustBundle: | (4)
        -----BEGIN CERTIFICATE-----
        <MY_TRUSTED_CA_CERT>
        -----END CERTIFICATE-----
    ...
    1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
    2 A proxy URL to use for creating HTTPS connections outside the cluster. If this field is not specified, then httpProxy is used for both HTTP and HTTPS connections. The URL scheme must be http; https is currently not supported.
    3 A comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to include all subdomains of that domain. Use * to bypass the proxy for all destinations.
    4 If provided, the installation program generates a ConfigMap that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle ConfigMap that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this ConfigMap is referenced in the Proxy object’s trustedCA field. The additionalTrustBundle field is required unless the proxy’s identity certificate is signed by an authority from the RHCOS trust bundle.

    The installation program does not support the proxy readinessEndpoints field.

  2. Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.

Only the Proxy object named cluster is supported, and no additional proxies can be created.
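
After the cluster is running, you can inspect the resulting Proxy object; for example:

  $ oc get proxy/cluster -o yaml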

Creating the Kubernetes manifest and Ignition config files

Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to make its machines.

The Ignition config files that the installation program generates contain certificates that expire after 24 hours. You must complete your cluster installation and keep the cluster running for 24 hours in a non-degraded state to ensure that the first certificate rotation has finished.

Prerequisites
  • Obtain the OpenShift Container Platform installation program.

  • Create the install-config.yaml installation configuration file.

Procedure
  1. Generate the Kubernetes manifests for the cluster:

    $ ./openshift-install create manifests --dir=<installation_directory> (1)
    
    WARNING There are no compute nodes specified. The cluster will not fully initialize without compute nodes.
    INFO Consuming "Install Config" from target directory
    1 For <installation_directory>, specify the installation directory that contains the install-config.yaml file you created.

    Because you create your own compute machines later in the installation process, you can safely ignore this warning.

  2. Remove the Kubernetes manifest files that define the control plane machines:

    $ rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml

    By removing these files, you prevent the cluster from automatically generating control plane machines.

  3. Remove the Kubernetes manifest files that define the worker machines:

    $ rm -f openshift/99_openshift-cluster-api_worker-machineset-*.yaml

    Because you create and manage the worker machines yourself, you do not need to initialize these machines.

  4. Modify the manifests/cluster-scheduler-02-config.yml Kubernetes manifest file to prevent Pods from being scheduled on the control plane machines:

    1. Open the manifests/cluster-scheduler-02-config.yml file.

    2. Locate the mastersSchedulable parameter and set its value to false.

    3. Save and exit the file.

    Currently, due to a Kubernetes limitation, router Pods running on control plane machines will not be reachable by the ingress load balancer. This step might not be required in a future minor version of OpenShift Container Platform.
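
    If you prefer to script this edit, a sketch with sed on Linux, assuming the file contains a mastersSchedulable: true line:

    $ sed -i 's/mastersSchedulable: true/mastersSchedulable: false/' manifests/cluster-scheduler-02-config.yml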

  5. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the manifests/cluster-dns-02-config.yml DNS configuration file:

    apiVersion: config.openshift.io/v1
    kind: DNS
    metadata:
      creationTimestamp: null
      name: cluster
    spec:
      baseDomain: example.openshift.com
      privateZone: (1)
        id: mycluster-100419-private-zone
      publicZone: (1)
        id: example.openshift.com
    status: {}
    1 Remove these sections completely.

    If you do so, you must add ingress DNS records manually in a later step.

  6. Obtain the Ignition config files:

    $ ./openshift-install create ignition-configs --dir=<installation_directory> (1)
    1 For <installation_directory>, specify the same installation directory.

    The following files are generated in the directory:

    .
    ├── auth
    │   ├── kubeadmin-password
    │   └── kubeconfig
    ├── bootstrap.ign
    ├── master.ign
    ├── metadata.json
    └── worker.ign

Extracting the infrastructure name

The Ignition configs contain a unique cluster identifier that you can use to uniquely identify your cluster in Amazon Web Services (AWS). The provided CloudFormation templates contain references to this infrastructure name, so you must extract it.

Prerequisites
  • Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

  • Generate the Ignition config files for your cluster.

  • Install the jq package.

Procedure
  • To extract and view the infrastructure name from the Ignition config file metadata, run the following command:

    $ jq -r .infraID /<installation_directory>/metadata.json (1)
    openshift-vw9j6 (2)
    1 For <installation_directory>, specify the path to the directory that you stored the installation files in.
    2 The output of this command is your cluster name and a random string.
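
    Because several later CloudFormation parameter files reuse this value, you might capture it in a shell variable; the variable name is just a convention:

    $ export INFRA_ID=$(jq -r .infraID <installation_directory>/metadata.json)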

Creating a VPC in AWS

You must create a VPC in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. You can customize the VPC to meet your requirements, including VPN and route tables. The easiest way to create the VPC is to modify the provided CloudFormation template.

If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites
  • Configure an AWS account.

  • Generate the Ignition config files for your cluster.

Procedure
  1. Create a JSON file that contains the parameter values that the template requires:

    [
      {
        "ParameterKey": "VpcCidr", (1)
        "ParameterValue": "10.0.0.0/16" (2)
      },
      {
        "ParameterKey": "AvailabilityZoneCount", (3)
        "ParameterValue": "1" (4)
      },
      {
        "ParameterKey": "SubnetBits", (5)
        "ParameterValue": "12" (6)
      }
    ]
    1 The CIDR block for the VPC.
    2 Specify a CIDR block in the format x.x.x.x/16-24.
    3 The number of availability zones to deploy the VPC in.
    4 Specify an integer between 1 and 3.
    5 The size of each subnet in each availability zone.
    6 Specify an integer between 5 and 13, where 5 is /27 and 13 is /19.
  2. Copy the template from the CloudFormation template for the VPC section of this topic and save it as a YAML file on your computer. This template describes the VPC that your cluster requires.

  3. Launch the template:

    You must enter the command on a single line.

    $ aws cloudformation create-stack --stack-name <name> (1)
         --template-body file://<template>.yaml (2)
         --parameters file://<parameters>.json (3)
    1 <name> is the name for the CloudFormation stack, such as cluster-vpc. You need the name of this stack if you remove the cluster.
    2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved.
    3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file.
  4. Confirm that the template components exist:

    $ aws cloudformation describe-stacks --stack-name <name>

    After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster:

    VpcId

    The ID of your VPC.

    PublicSubnetIds

    The IDs of the new public subnets.

    PrivateSubnetIds

    The IDs of the new private subnets.
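
    To capture only these output values, you can query the stack with the AWS CLI; for example:

    $ aws cloudformation describe-stacks --stack-name <name> \
         --query 'Stacks[0].Outputs' --output table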

CloudFormation template for the VPC

You can use the following CloudFormation template to deploy the VPC that you need for your OpenShift Container Platform cluster.

AWSTemplateFormatVersion: 2010-09-09
Description: Template for Best Practice VPC with 1-3 AZs

Parameters:
  VpcCidr:
    AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))$
    ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24.
    Default: 10.0.0.0/16
    Description: CIDR block for VPC.
    Type: String
  AvailabilityZoneCount:
    ConstraintDescription: "The number of availability zones. (Min: 1, Max: 3)"
    MinValue: 1
    MaxValue: 3
    Default: 1
    Description: "How many AZs to create VPC subnets for. (Min: 1, Max: 3)"
    Type: Number
  SubnetBits:
    ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27.
    MinValue: 5
    MaxValue: 13
    Default: 12
    Description: "Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)"
    Type: Number

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcCidr
      - SubnetBits
    - Label:
        default: "Availability Zones"
      Parameters:
      - AvailabilityZoneCount
    ParameterLabels:
      AvailabilityZoneCount:
        default: "Availability Zone Count"
      VpcCidr:
        default: "VPC CIDR"
      SubnetBits:
        default: "Bits Per Subnet"

Conditions:
  DoAz3: !Equals [3, !Ref AvailabilityZoneCount]
  DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3]

Resources:
  VPC:
    Type: "AWS::EC2::VPC"
    Properties:
      EnableDnsSupport: "true"
      EnableDnsHostnames: "true"
      CidrBlock: !Ref VpcCidr
  PublicSubnet:
    Type: "AWS::EC2::Subnet"
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 0
      - Fn::GetAZs: !Ref "AWS::Region"
  PublicSubnet2:
    Type: "AWS::EC2::Subnet"
    Condition: DoAz2
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 1
      - Fn::GetAZs: !Ref "AWS::Region"
  PublicSubnet3:
    Type: "AWS::EC2::Subnet"
    Condition: DoAz3
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 2
      - Fn::GetAZs: !Ref "AWS::Region"
  InternetGateway:
    Type: "AWS::EC2::InternetGateway"
  GatewayToInternet:
    Type: "AWS::EC2::VPCGatewayAttachment"
    Properties:
      VpcId: !Ref VPC
      InternetGatewayId: !Ref InternetGateway
  PublicRouteTable:
    Type: "AWS::EC2::RouteTable"
    Properties:
      VpcId: !Ref VPC
  PublicRoute:
    Type: "AWS::EC2::Route"
    DependsOn: GatewayToInternet
    Properties:
      RouteTableId: !Ref PublicRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref InternetGateway
  PublicSubnetRouteTableAssociation:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Properties:
      SubnetId: !Ref PublicSubnet
      RouteTableId: !Ref PublicRouteTable
  PublicSubnetRouteTableAssociation2:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Condition: DoAz2
    Properties:
      SubnetId: !Ref PublicSubnet2
      RouteTableId: !Ref PublicRouteTable
  PublicSubnetRouteTableAssociation3:
    Condition: DoAz3
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Properties:
      SubnetId: !Ref PublicSubnet3
      RouteTableId: !Ref PublicRouteTable
  PrivateSubnet:
    Type: "AWS::EC2::Subnet"
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 0
      - Fn::GetAZs: !Ref "AWS::Region"
  PrivateRouteTable:
    Type: "AWS::EC2::RouteTable"
    Properties:
      VpcId: !Ref VPC
  PrivateSubnetRouteTableAssociation:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Properties:
      SubnetId: !Ref PrivateSubnet
      RouteTableId: !Ref PrivateRouteTable
  NAT:
    DependsOn:
    - GatewayToInternet
    Type: "AWS::EC2::NatGateway"
    Properties:
      AllocationId:
        "Fn::GetAtt":
        - EIP
        - AllocationId
      SubnetId: !Ref PublicSubnet
  EIP:
    Type: "AWS::EC2::EIP"
    Properties:
      Domain: vpc
  Route:
    Type: "AWS::EC2::Route"
    Properties:
      RouteTableId:
        Ref: PrivateRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId:
        Ref: NAT
  PrivateSubnet2:
    Type: "AWS::EC2::Subnet"
    Condition: DoAz2
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 1
      - Fn::GetAZs: !Ref "AWS::Region"
  PrivateRouteTable2:
    Type: "AWS::EC2::RouteTable"
    Condition: DoAz2
    Properties:
      VpcId: !Ref VPC
  PrivateSubnetRouteTableAssociation2:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Condition: DoAz2
    Properties:
      SubnetId: !Ref PrivateSubnet2
      RouteTableId: !Ref PrivateRouteTable2
  NAT2:
    DependsOn:
    - GatewayToInternet
    Type: "AWS::EC2::NatGateway"
    Condition: DoAz2
    Properties:
      AllocationId:
        "Fn::GetAtt":
        - EIP2
        - AllocationId
      SubnetId: !Ref PublicSubnet2
  EIP2:
    Type: "AWS::EC2::EIP"
    Condition: DoAz2
    Properties:
      Domain: vpc
  Route2:
    Type: "AWS::EC2::Route"
    Condition: DoAz2
    Properties:
      RouteTableId:
        Ref: PrivateRouteTable2
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId:
        Ref: NAT2
  PrivateSubnet3:
    Type: "AWS::EC2::Subnet"
    Condition: DoAz3
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 2
      - Fn::GetAZs: !Ref "AWS::Region"
  PrivateRouteTable3:
    Type: "AWS::EC2::RouteTable"
    Condition: DoAz3
    Properties:
      VpcId: !Ref VPC
  PrivateSubnetRouteTableAssociation3:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Condition: DoAz3
    Properties:
      SubnetId: !Ref PrivateSubnet3
      RouteTableId: !Ref PrivateRouteTable3
  NAT3:
    DependsOn:
    - GatewayToInternet
    Type: "AWS::EC2::NatGateway"
    Condition: DoAz3
    Properties:
      AllocationId:
        "Fn::GetAtt":
        - EIP3
        - AllocationId
      SubnetId: !Ref PublicSubnet3
  EIP3:
    Type: "AWS::EC2::EIP"
    Condition: DoAz3
    Properties:
      Domain: vpc
  Route3:
    Type: "AWS::EC2::Route"
    Condition: DoAz3
    Properties:
      RouteTableId:
        Ref: PrivateRouteTable3
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId:
        Ref: NAT3
  S3Endpoint:
    Type: AWS::EC2::VPCEndpoint
    Properties:
      PolicyDocument:
        Version: 2012-10-17
        Statement:
        - Effect: Allow
          Principal: '*'
          Action:
          - '*'
          Resource:
          - '*'
      RouteTableIds:
      - !Ref PublicRouteTable
      - !Ref PrivateRouteTable
      - !If [DoAz2, !Ref PrivateRouteTable2, !Ref "AWS::NoValue"]
      - !If [DoAz3, !Ref PrivateRouteTable3, !Ref "AWS::NoValue"]
      ServiceName: !Join
      - ''
      - - com.amazonaws.
        - !Ref 'AWS::Region'
        - .s3
      VpcId: !Ref VPC

Outputs:
  VpcId:
    Description: ID of the new VPC.
    Value: !Ref VPC
  PublicSubnetIds:
    Description: Subnet IDs of the public subnets.
    Value:
      !Join [
        ",",
        [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PublicSubnet3, !Ref "AWS::NoValue"]]
      ]
  PrivateSubnetIds:
    Description: Subnet IDs of the private subnets.
    Value:
      !Join [
        ",",
        [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PrivateSubnet3, !Ref "AWS::NoValue"]]
      ]

Creating networking and load balancing components in AWS

You must configure networking and load balancing in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. The easiest way to create these components is to modify the provided CloudFormation template, which also creates a hosted zone and subnet tags.

You can run the template multiple times within a single VPC.

If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites
  • Configure an AWS account.

  • Generate the Ignition config files for your cluster.

  • Create and configure a VPC and associated subnets in AWS.

Procedure
  1. Obtain the Hosted Zone ID for the Route53 zone that you specified in the install-config.yaml file for your cluster. You can obtain this ID from the AWS console or by running the following command:

    You must enter the command on a single line.

    $ aws route53 list-hosted-zones-by-name |
         jq --arg name "<route53_domain>." \ (1)
         -r '.HostedZones | .[] | select(.Name=="\($name)") | .Id'
    1 For the <route53_domain>, specify the Route53 base domain that you used when you generated the install-config.yaml file for the cluster.
  2. Create a JSON file that contains the parameter values that the template requires:

    [
      {
        "ParameterKey": "ClusterName", (1)
        "ParameterValue": "mycluster" (2)
      },
      {
        "ParameterKey": "InfrastructureName", (3)
        "ParameterValue": "mycluster-<random_string>" (4)
      },
      {
        "ParameterKey": "HostedZoneId", (5)
        "ParameterValue": "<random_string>" (6)
      },
      {
        "ParameterKey": "HostedZoneName", (7)
        "ParameterValue": "example.com" (8)
      },
      {
        "ParameterKey": "PublicSubnets", (9)
        "ParameterValue": "subnet-<random_string>" (10)
      },
      {
        "ParameterKey": "PrivateSubnets", (11)
        "ParameterValue": "subnet-<random_string>" (12)
      },
      {
        "ParameterKey": "VpcId", (13)
        "ParameterValue": "vpc-<random_string>" (14)
      }
    ]
    1 A short, representative cluster name to use for host names, etc.
    2 Specify the cluster name that you used when you generated the install-config.yaml file for the cluster.
    3 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.
    4 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.
    5 The Route53 public zone ID to register the targets with.
    6 Specify the Route53 public zone ID, which has a format similar to Z21IXYZABCZ2A4. You can obtain this value from the AWS console.
    7 The Route53 zone to register the targets with.
    8 Specify the Route53 base domain that you used when you generated the install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console.
    9 The public subnets that you created for your VPC.
    10 Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC.
    11 The private subnets that you created for your VPC.
    12 Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC.
    13 The VPC that you created for the cluster.
    14 Specify the VpcId value from the output of the CloudFormation template for the VPC.
  3. Copy the template from the CloudFormation template for the network and load balancers section of this topic and save it as a YAML file on your computer. This template describes the networking and load balancing objects that your cluster requires.

  4. Launch the template:

    You must enter the command on a single line.

    $ aws cloudformation create-stack --stack-name <name> (1)
         --template-body file://<template>.yaml (2)
         --parameters file://<parameters>.json (3)
         --capabilities CAPABILITY_NAMED_IAM
    1 <name> is the name for the CloudFormation stack, such as cluster-dns. You need the name of this stack if you remove the cluster.
    2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved.
    3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file.
  5. Confirm that the template components exist:

    $ aws cloudformation describe-stacks --stack-name <name>

    After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster:

    PrivateHostedZoneId

    Hosted zone ID for the private DNS.

    ExternalApiLoadBalancerName

    Full name of the external API load balancer.

    InternalApiLoadBalancerName

    Full name of the internal API load balancer.

    ApiServerDnsName

    Full host name of the API server.

    RegisterNlbIpTargetsLambda

    Lambda ARN useful to help register/deregister IP targets for these load balancers.

    ExternalApiTargetGroupArn

    ARN of external API target group.

    InternalApiTargetGroupArn

    ARN of internal API target group.

    InternalServiceTargetGroupArn

    ARN of internal service target group.

CloudFormation template for the network and load balancers

You can use the following CloudFormation template to deploy the networking objects and load balancers that you need for your OpenShift Container Platform cluster.

AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Network Elements (Route53 & LBs)

Parameters:
  ClusterName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, representative cluster name to use for host names and other identifying names.
    Type: String
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster.
    Type: String
  HostedZoneId:
    Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4.
    Type: String
  HostedZoneName:
    Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period.
    Type: String
    Default: "example.com"
  PublicSubnets:
    Description: The internet-facing subnets.
    Type: List<AWS::EC2::Subnet::Id>
  PrivateSubnets:
    Description: The internal subnets.
    Type: List<AWS::EC2::Subnet::Id>
  VpcId:
    Description: The VPC-scoped resources will belong to this VPC.
    Type: AWS::EC2::VPC::Id

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - ClusterName
      - InfrastructureName
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcId
      - PublicSubnets
      - PrivateSubnets
    - Label:
        default: "DNS"
      Parameters:
      - HostedZoneName
      - HostedZoneId
    ParameterLabels:
      ClusterName:
        default: "Cluster Name"
      InfrastructureName:
        default: "Infrastructure Name"
      VpcId:
        default: "VPC ID"
      PublicSubnets:
        default: "Public Subnets"
      PrivateSubnets:
        default: "Private Subnets"
      HostedZoneName:
        default: "Public Hosted Zone Name"
      HostedZoneId:
        default: "Public Hosted Zone ID"

Resources:
  ExtApiElb:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Name: !Join ["-", [!Ref InfrastructureName, "ext"]]
      IpAddressType: ipv4
      Subnets: !Ref PublicSubnets
      Type: network

  IntApiElb:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Name: !Join ["-", [!Ref InfrastructureName, "int"]]
      Scheme: internal
      IpAddressType: ipv4
      Subnets: !Ref PrivateSubnets
      Type: network

  IntDns:
    Type: "AWS::Route53::HostedZone"
    Properties:
      HostedZoneConfig:
        Comment: "Managed by CloudFormation"
      Name: !Join [".", [!Ref ClusterName, !Ref HostedZoneName]]
      HostedZoneTags:
      - Key: Name
        Value: !Join ["-", [!Ref InfrastructureName, "int"]]
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "owned"
      VPCs:
      - VPCId: !Ref VpcId
        VPCRegion: !Ref "AWS::Region"

  ExternalApiServerRecord:
    Type: AWS::Route53::RecordSetGroup
    Properties:
      Comment: Alias record for the API server
      HostedZoneId: !Ref HostedZoneId
      RecordSets:
      - Name:
          !Join [
            ".",
            ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]],
          ]
        Type: A
        AliasTarget:
          HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID
          DNSName: !GetAtt ExtApiElb.DNSName

  InternalApiServerRecord:
    Type: AWS::Route53::RecordSetGroup
    Properties:
      Comment: Alias record for the API server
      HostedZoneId: !Ref IntDns
      RecordSets:
      - Name:
          !Join [
            ".",
            ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]],
          ]
        Type: A
        AliasTarget:
          HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID
          DNSName: !GetAtt IntApiElb.DNSName
      - Name:
          !Join [
            ".",
            ["api-int", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]],
          ]
        Type: A
        AliasTarget:
          HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID
          DNSName: !GetAtt IntApiElb.DNSName

  ExternalApiListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      DefaultActions:
      - Type: forward
        TargetGroupArn:
          Ref: ExternalApiTargetGroup
      LoadBalancerArn:
        Ref: ExtApiElb
      Port: 6443
      Protocol: TCP

  ExternalApiTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      Port: 6443
      Protocol: TCP
      TargetType: ip
      VpcId:
        Ref: VpcId
      TargetGroupAttributes:
      - Key: deregistration_delay.timeout_seconds
        Value: 60

  InternalApiListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      DefaultActions:
      - Type: forward
        TargetGroupArn:
          Ref: InternalApiTargetGroup
      LoadBalancerArn:
        Ref: IntApiElb
      Port: 6443
      Protocol: TCP

  InternalApiTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      Port: 6443
      Protocol: TCP
      TargetType: ip
      VpcId:
        Ref: VpcId
      TargetGroupAttributes:
      - Key: deregistration_delay.timeout_seconds
        Value: 60

  InternalServiceInternalListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      DefaultActions:
      - Type: forward
        TargetGroupArn:
          Ref: InternalServiceTargetGroup
      LoadBalancerArn:
        Ref: IntApiElb
      Port: 22623
      Protocol: TCP

  InternalServiceTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      Port: 22623
      Protocol: TCP
      TargetType: ip
      VpcId:
        Ref: VpcId
      TargetGroupAttributes:
      - Key: deregistration_delay.timeout_seconds
        Value: 60

  RegisterTargetLambdaIamRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Join ["-", [!Ref InfrastructureName, "nlb", "lambda", "role"]]
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "lambda.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Path: "/"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action:
              [
                "elasticloadbalancing:RegisterTargets",
                "elasticloadbalancing:DeregisterTargets",
              ]
            Resource: !Ref InternalApiTargetGroup
          - Effect: "Allow"
            Action:
              [
                "elasticloadbalancing:RegisterTargets",
                "elasticloadbalancing:DeregisterTargets",
              ]
            Resource: !Ref InternalServiceTargetGroup
          - Effect: "Allow"
            Action:
              [
                "elasticloadbalancing:RegisterTargets",
                "elasticloadbalancing:DeregisterTargets",
              ]
            Resource: !Ref ExternalApiTargetGroup

  RegisterNlbIpTargets:
    Type: "AWS::Lambda::Function"
    Properties:
      Handler: "index.handler"
      Role:
        Fn::GetAtt:
        - "RegisterTargetLambdaIamRole"
        - "Arn"
      Code:
        ZipFile: |
          import json
          import boto3
          import cfnresponse
          def handler(event, context):
            elb = boto3.client('elbv2')
            if event['RequestType'] == 'Delete':
              elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}])
            elif event['RequestType'] == 'Create':
              elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}])
            responseData = {}
            cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp'])
      Runtime: "python3.7"
      Timeout: 120

  RegisterSubnetTagsLambdaIamRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Join ["-", [!Ref InfrastructureName, "subnet-tags-lambda-role"]]
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "lambda.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Path: "/"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "subnet-tagging-policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action:
              [
                "ec2:DeleteTags",
                "ec2:CreateTags"
              ]
            Resource: "arn:aws:ec2:*:*:subnet/*"
          - Effect: "Allow"
            Action:
              [
                "ec2:DescribeSubnets",
                "ec2:DescribeTags"
              ]
            Resource: "*"

  RegisterSubnetTags:
    Type: "AWS::Lambda::Function"
    Properties:
      Handler: "index.handler"
      Role:
        Fn::GetAtt:
        - "RegisterSubnetTagsLambdaIamRole"
        - "Arn"
      Code:
        ZipFile: |
          import json
          import boto3
          import cfnresponse
          def handler(event, context):
            ec2_client = boto3.client('ec2')
            if event['RequestType'] == 'Delete':
              for subnet_id in event['ResourceProperties']['Subnets']:
                ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName']}]);
            elif event['RequestType'] == 'Create':
              for subnet_id in event['ResourceProperties']['Subnets']:
                ec2_client.create_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}]);
            responseData = {}
            cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0])
      Runtime: "python3.7"
      Timeout: 120

  RegisterPublicSubnetTags:
    Type: Custom::SubnetRegister
    Properties:
      ServiceToken: !GetAtt RegisterSubnetTags.Arn
      InfrastructureName: !Ref InfrastructureName
      Subnets: !Ref PublicSubnets

  RegisterPrivateSubnetTags:
    Type: Custom::SubnetRegister
    Properties:
      ServiceToken: !GetAtt RegisterSubnetTags.Arn
      InfrastructureName: !Ref InfrastructureName
      Subnets: !Ref PrivateSubnets

Outputs:
  PrivateHostedZoneId:
    Description: Hosted zone ID for the private DNS, which is required for private records.
    Value: !Ref IntDns
  ExternalApiLoadBalancerName:
    Description: Full name of the External API load balancer created.
    Value: !GetAtt ExtApiElb.LoadBalancerFullName
  InternalApiLoadBalancerName:
    Description: Full name of the Internal API load balancer created.
    Value: !GetAtt IntApiElb.LoadBalancerFullName
  ApiServerDnsName:
    Description: Full hostname of the API server, which is required for the Ignition config files.
    Value: !Join [".", ["api-int", !Ref ClusterName, !Ref HostedZoneName]]
  RegisterNlbIpTargetsLambda:
    Description: ARN of the Lambda function that registers and deregisters IP targets for these load balancers.
    Value: !GetAtt RegisterNlbIpTargets.Arn
  ExternalApiTargetGroupArn:
    Description: ARN of External API target group.
    Value: !Ref ExternalApiTargetGroup
  InternalApiTargetGroupArn:
    Description: ARN of Internal API target group.
    Value: !Ref InternalApiTargetGroup
  InternalServiceTargetGroupArn:
    Description: ARN of internal service target group.
    Value: !Ref InternalServiceTargetGroup

Creating security groups and roles in AWS

You must create security groups and roles in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. The easiest way to create these components is to modify the provided CloudFormation template.

If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites
  • Configure an AWS account.

  • Generate the Ignition config files for your cluster.

  • Create and configure a VPC and associated subnets in AWS.

Procedure
  1. Create a JSON file that contains the parameter values that the template requires:

    [
      {
        "ParameterKey": "InfrastructureName", (1)
        "ParameterValue": "mycluster-<random_string>" (2)
      },
      {
        "ParameterKey": "VpcCidr", (3)
        "ParameterValue": "10.0.0.0/16" (4)
      },
      {
        "ParameterKey": "PrivateSubnets", (5)
        "ParameterValue": "subnet-<random_string>" (6)
      },
      {
        "ParameterKey": "VpcId", (7)
        "ParameterValue": "vpc-<random_string>" (8)
      }
    ]
    1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.
    2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.
    3 The CIDR block for the VPC.
    4 Specify the CIDR block parameter that you used for the VPC that you defined in the form x.x.x.x/16-24.
    5 The private subnets that you created for your VPC.
    6 Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC.
    7 The VPC that you created for the cluster.
    8 Specify the VpcId value from the output of the CloudFormation template for the VPC.
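
    If you did not record these values, you can query them from the VPC stack outputs. For example, the following commands, which assume that your VPC stack is named cluster-vpc, print the VpcId and PrivateSubnetIds output values:

    $ aws cloudformation describe-stacks --stack-name cluster-vpc \
         --query 'Stacks[0].Outputs[?OutputKey==`VpcId`].OutputValue' --output text
    $ aws cloudformation describe-stacks --stack-name cluster-vpc \
         --query 'Stacks[0].Outputs[?OutputKey==`PrivateSubnetIds`].OutputValue' --output text
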
  2. Copy the template from the CloudFormation template for security objects section of this topic and save it as a YAML file on your computer. This template describes the security groups and roles that your cluster requires.

  3. Launch the template:

    You must enter the command on a single line.

    $ aws cloudformation create-stack --stack-name <name> (1)
         --template-body file://<template>.yaml (2)
         --parameters file://<parameters>.json (3)
         --capabilities CAPABILITY_NAMED_IAM
    1 <name> is the name for the CloudFormation stack, such as cluster-sec. You need the name of this stack if you remove the cluster.
    2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved.
    3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file.
  4. Confirm that the template components exist:

    $ aws cloudformation describe-stacks --stack-name <name>

    After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster:

    MasterSecurityGroupId

    Master Security Group ID

    WorkerSecurityGroupId

    Worker Security Group ID

    MasterInstanceProfile

    Master IAM Instance Profile

    WorkerInstanceProfile

    Worker IAM Instance Profile
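
    To list all of these output values at once, you can add a query to the same describe-stacks command, for example:

    $ aws cloudformation describe-stacks --stack-name <name> \
         --query 'Stacks[0].Outputs' --output table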

CloudFormation template for security objects

You can use the following CloudFormation template to deploy the security objects that you need for your OpenShift Container Platform cluster.

AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM)

Parameters:
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster.
    Type: String
  VpcCidr:
    AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))$
    ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24.
    Default: 10.0.0.0/16
    Description: CIDR block for VPC.
    Type: String
  VpcId:
    Description: The VPC-scoped resources will belong to this VPC.
    Type: AWS::EC2::VPC::Id
  PrivateSubnets:
    Description: The internal subnets.
    Type: List<AWS::EC2::Subnet::Id>

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - InfrastructureName
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcId
      - VpcCidr
      - PrivateSubnets
    ParameterLabels:
      InfrastructureName:
        default: "Infrastructure Name"
      VpcId:
        default: "VPC ID"
      VpcCidr:
        default: "VPC CIDR"
      PrivateSubnets:
        default: "Private Subnets"

Resources:
  MasterSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Cluster Master Security Group
      SecurityGroupIngress:
      - IpProtocol: icmp
        FromPort: 0
        ToPort: 0
        CidrIp: !Ref VpcCidr
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: !Ref VpcCidr
      - IpProtocol: tcp
        FromPort: 6443
        ToPort: 6443
        CidrIp: !Ref VpcCidr
      - IpProtocol: tcp
        FromPort: 22623
        ToPort: 22623
        CidrIp: !Ref VpcCidr
      VpcId: !Ref VpcId

  WorkerSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Cluster Worker Security Group
      SecurityGroupIngress:
      - IpProtocol: icmp
        FromPort: 0
        ToPort: 0
        CidrIp: !Ref VpcCidr
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: !Ref VpcCidr
      VpcId: !Ref VpcId

  MasterIngressEtcd:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: etcd
      FromPort: 2379
      ToPort: 2380
      IpProtocol: tcp

  MasterIngressVxlan:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Vxlan packets
      FromPort: 4789
      ToPort: 4789
      IpProtocol: udp

  MasterIngressWorkerVxlan:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Vxlan packets
      FromPort: 4789
      ToPort: 4789
      IpProtocol: udp

  MasterIngressInternal:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: tcp

  MasterIngressWorkerInternal:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: tcp

  MasterIngressKube:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes kubelet, scheduler and controller manager
      FromPort: 10250
      ToPort: 10259
      IpProtocol: tcp

  MasterIngressWorkerKube:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes kubelet, scheduler and controller manager
      FromPort: 10250
      ToPort: 10259
      IpProtocol: tcp

  MasterIngressIngressServices:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: tcp

  MasterIngressWorkerIngressServices:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: tcp

  WorkerIngressVxlan:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Vxlan packets
      FromPort: 4789
      ToPort: 4789
      IpProtocol: udp

  WorkerIngressWorkerVxlan:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Vxlan packets
      FromPort: 4789
      ToPort: 4789
      IpProtocol: udp

  WorkerIngressInternal:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: tcp

  WorkerIngressWorkerInternal:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: tcp

  WorkerIngressKube:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes secure kubelet port
      FromPort: 10250
      ToPort: 10250
      IpProtocol: tcp

  WorkerIngressWorkerKube:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal Kubernetes communication
      FromPort: 10250
      ToPort: 10250
      IpProtocol: tcp

  WorkerIngressIngressServices:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: tcp

  WorkerIngressWorkerIngressServices:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: tcp

  MasterIamRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "ec2.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action: "ec2:*"
            Resource: "*"
          - Effect: "Allow"
            Action: "elasticloadbalancing:*"
            Resource: "*"
          - Effect: "Allow"
            Action: "iam:PassRole"
            Resource: "*"
          - Effect: "Allow"
            Action: "s3:GetObject"
            Resource: "*"

  MasterInstanceProfile:
    Type: "AWS::IAM::InstanceProfile"
    Properties:
      Roles:
      - Ref: "MasterIamRole"

  WorkerIamRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "ec2.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "worker", "policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action: "ec2:Describe*"
            Resource: "*"

  WorkerInstanceProfile:
    Type: "AWS::IAM::InstanceProfile"
    Properties:
      Roles:
      - Ref: "WorkerIamRole"

Outputs:
  MasterSecurityGroupId:
    Description: Master Security Group ID
    Value: !GetAtt MasterSecurityGroup.GroupId

  WorkerSecurityGroupId:
    Description: Worker Security Group ID
    Value: !GetAtt WorkerSecurityGroup.GroupId

  MasterInstanceProfile:
    Description: Master IAM Instance Profile
    Value: !Ref MasterInstanceProfile

  WorkerInstanceProfile:
    Description: Worker IAM Instance Profile
    Value: !Ref WorkerInstanceProfile

RHCOS AMIs for the AWS infrastructure

You must use a valid Red Hat Enterprise Linux CoreOS (RHCOS) AMI for your Amazon Web Services (AWS) zone for your OpenShift Container Platform nodes.

Table 2. RHCOS AMIs

AWS zone         AWS AMI

ap-northeast-1   ami-0426ca3481a088c7b
ap-northeast-2   ami-014514ae47679721b
ap-south-1       ami-0bd772ba746948d9a
ap-southeast-1   ami-0d76ac0ebaac29e40
ap-southeast-2   ami-0391e92574fb09e08
ca-central-1     ami-04419691da69850cf
eu-central-1     ami-092b69120ecf915ed
eu-north-1       ami-0175e9c9d258cc11d
eu-west-1        ami-04370efd78434697b
eu-west-2        ami-00c74e593125e0096
eu-west-3        ami-058ad17da14ff4d0d
sa-east-1        ami-03f6b71e93e630dab
us-east-1        ami-01e7fdcb66157b224
us-east-2        ami-0bc59aaa7363b805d
us-west-1        ami-0ba912f53c1fdcdf0
us-west-2        ami-08e10b201e19fd5e7
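
Before you launch machines, you can optionally confirm that the AMI for your region is visible to your account. The region and AMI ID below are the us-east-1 entry from the table; substitute your own:

$ aws ec2 describe-images --region us-east-1 \
     --image-ids ami-01e7fdcb66157b224 \
     --query 'Images[0].{Name:Name,State:State}'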

Creating the bootstrap node in AWS

You must create the bootstrap node in Amazon Web Services (AWS) to use during OpenShift Container Platform cluster initialization. The easiest way to create this node is to modify the provided CloudFormation template.

If you do not use the provided CloudFormation template to create your bootstrap node, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites
  • Configure an AWS account.

  • Generate the Ignition config files for your cluster.

  • Create and configure a VPC and associated subnets in AWS.

  • Create and configure DNS, load balancers, and listeners in AWS.

  • Create control plane and compute roles.

Procedure
  1. Provide a location to serve the bootstrap.ign Ignition config file to your cluster. This file is located in your installation directory. One way to do this is to create an S3 bucket in your cluster’s region and upload the Ignition config file to it.

    The provided CloudFormation template assumes that the Ignition config files for your cluster are served from an S3 bucket. If you choose to serve the files from another location, you must modify the templates.

    The bootstrap Ignition config file contains secrets, such as X.509 keys. The following steps provide basic security for the S3 bucket. To provide additional security, you can enable an S3 bucket policy to allow only certain users, such as the OpenShift IAM user, to access objects that the bucket contains. You can also avoid S3 entirely and serve your bootstrap Ignition config file from any address that the bootstrap machine can reach; see the example after the following sub-steps.

    1. Create the bucket:

      $ aws s3 mb s3://<cluster-name>-infra (1)
      1 <cluster-name>-infra is the bucket name.
    2. Upload the bootstrap.ign Ignition config file to the bucket:

      $ aws s3 cp bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign
    3. Verify that the file uploaded:

      $ aws s3 ls s3://<cluster-name>-infra/
      
      2019-04-03 16:15:16     314878 bootstrap.ign
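
    As an alternative sketch, instead of relying on the bootstrap node's IAM role for S3 access, you can generate a time-limited presigned HTTPS URL for the object and supply that URL as the BootstrapIgnitionLocation parameter value:

      $ aws s3 presign s3://<cluster-name>-infra/bootstrap.ign --expires-in 3600

    The presigned URL expires after the specified number of seconds, so the bootstrap node must fetch its Ignition config file before then.
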
  2. Create a JSON file that contains the parameter values that the template requires:

    [
      {
        "ParameterKey": "InfrastructureName", (1)
        "ParameterValue": "mycluster-<random_string>" (2)
      },
      {
        "ParameterKey": "RhcosAmi", (3)
        "ParameterValue": "ami-<random_string>" (4)
      },
      {
        "ParameterKey": "AllowedBootstrapSshCidr", (5)
        "ParameterValue": "0.0.0.0/0" (6)
      },
      {
        "ParameterKey": "PublicSubnet", (7)
        "ParameterValue": "subnet-<random_string>" (8)
      },
      {
        "ParameterKey": "MasterSecurityGroupId", (9)
        "ParameterValue": "sg-<random_string>" (10)
      },
      {
        "ParameterKey": "VpcId", (11)
        "ParameterValue": "vpc-<random_string>" (12)
      },
      {
        "ParameterKey": "BootstrapIgnitionLocation", (13)
        "ParameterValue": "s3://<bucket_name>/bootstrap.ign" (14)
      },
      {
        "ParameterKey": "AutoRegisterELB", (15)
        "ParameterValue": "yes" (16)
      },
      {
        "ParameterKey": "RegisterNlbIpTargetsLambdaArn", (17)
        "ParameterValue": "arn:aws:lambda:<region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" (18)
      },
      {
        "ParameterKey": "ExternalApiTargetGroupArn", (19)
        "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" (20)
      },
      {
        "ParameterKey": "InternalApiTargetGroupArn", (21)
        "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" (22)
      },
      {
        "ParameterKey": "InternalServiceTargetGroupArn", (23)
        "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" (24)
      }
    ]
    1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.
    2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.
    3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the bootstrap node.
    4 Specify a valid AWS::EC2::Image::Id value.
    5 CIDR block to allow SSH access to the bootstrap node.
    6 Specify a CIDR block in the format x.x.x.x/16-24.
    7 The public subnet that is associated with your VPC to launch the bootstrap node into.
    8 Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC.
    9 The master security group ID for registering temporary rules.
    10 Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles.
    11 The VPC that the created resources will belong to.
    12 Specify the VpcId value from the output of the CloudFormation template for the VPC.
    13 The location to fetch the bootstrap Ignition config file from.
    14 Specify the S3 bucket and file name in the form s3://<bucket_name>/bootstrap.ign.
    15 Whether or not to register a network load balancer (NLB).
    16 Specify yes or no. If you specify yes, you must provide a Lambda Amazon Resource Name (ARN) value.
    17 The ARN of the Lambda function for NLB IP target registration.
    18 Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing.
    19 The ARN for external API load balancer target group.
    20 Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing.
    21 The ARN for internal API load balancer target group.
    22 Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing.
    23 The ARN for internal service load balancer target group.
    24 Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing.
  3. Copy the template from the CloudFormation template for the bootstrap machine section of this topic and save it as a YAML file on your computer. This template describes the bootstrap machine that your cluster requires.

  4. Launch the template:

    You must enter the command on a single line.

    $ aws cloudformation create-stack --stack-name <name> (1)
         --template-body file://<template>.yaml (2)
         --parameters file://<parameters>.json (3)
         --capabilities CAPABILITY_NAMED_IAM
    1 <name> is the name for the CloudFormation stack, such as cluster-bootstrap. You need the name of this stack if you remove the cluster.
    2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved.
    3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file.
  5. Confirm that the template components exist:

    $ aws cloudformation describe-stacks --stack-name <name>

    After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster:

    BootstrapInstanceId

    The bootstrap instance ID.

    BootstrapPublicIp

    The bootstrap node public IP address.

    BootstrapPrivateIp

    The bootstrap node private IP address.
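
    While the bootstrap process runs, you can also monitor it from the bootstrap node itself. For example, assuming that your SSH public key was included when you generated the Ignition config files:

    $ ssh core@<bootstrap_public_ip> journalctl -b -f -u bootkube.service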

CloudFormation template for the bootstrap machine

You can use the following CloudFormation template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster.

AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM)

Parameters:
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster.
    Type: String
  RhcosAmi:
    Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap.
    Type: AWS::EC2::Image::Id
  AllowedBootstrapSshCidr:
    AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/([0-9]|1[0-9]|2[0-9]|3[0-2]))$
    ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32.
    Default: 0.0.0.0/0
    Description: CIDR block to allow SSH access to the bootstrap node.
    Type: String
  PublicSubnet:
    Description: The public subnet to launch the bootstrap node into.
    Type: AWS::EC2::Subnet::Id
  MasterSecurityGroupId:
    Description: The master security group ID for registering temporary rules.
    Type: AWS::EC2::SecurityGroup::Id
  VpcId:
    Description: The VPC-scoped resources will belong to this VPC.
    Type: AWS::EC2::VPC::Id
  BootstrapIgnitionLocation:
    Default: s3://my-s3-bucket/bootstrap.ign
    Description: Ignition config file location.
    Type: String
  AutoRegisterELB:
    Default: "yes"
    AllowedValues:
    - "yes"
    - "no"
    Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter?
    Type: String
  RegisterNlbIpTargetsLambdaArn:
    Description: ARN for NLB IP target registration lambda.
    Type: String
  ExternalApiTargetGroupArn:
    Description: ARN for external API load balancer target group.
    Type: String
  InternalApiTargetGroupArn:
    Description: ARN for internal API load balancer target group.
    Type: String
  InternalServiceTargetGroupArn:
    Description: ARN for internal service load balancer target group.
    Type: String

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - InfrastructureName
    - Label:
        default: "Host Information"
      Parameters:
      - RhcosAmi
      - BootstrapIgnitionLocation
      - MasterSecurityGroupId
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcId
      - AllowedBootstrapSshCidr
      - PublicSubnet
    - Label:
        default: "Load Balancer Automation"
      Parameters:
      - AutoRegisterELB
      - RegisterNlbIpTargetsLambdaArn
      - ExternalApiTargetGroupArn
      - InternalApiTargetGroupArn
      - InternalServiceTargetGroupArn
    ParameterLabels:
      InfrastructureName:
        default: "Infrastructure Name"
      VpcId:
        default: "VPC ID"
      AllowedBootstrapSshCidr:
        default: "Allowed SSH Source"
      PublicSubnet:
        default: "Public Subnet"
      RhcosAmi:
        default: "Red Hat Enterprise Linux CoreOS AMI ID"
      BootstrapIgnitionLocation:
        default: "Bootstrap Ignition Source"
      MasterSecurityGroupId:
        default: "Master Security Group ID"
      AutoRegisterELB:
        default: "Use Provided ELB Automation"

Conditions:
  DoRegistration: !Equals ["yes", !Ref AutoRegisterELB]

Resources:
  BootstrapIamRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "ec2.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Path: "/"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "bootstrap", "policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action: "ec2:Describe*"
            Resource: "*"
          - Effect: "Allow"
            Action: "ec2:AttachVolume"
            Resource: "*"
          - Effect: "Allow"
            Action: "ec2:DetachVolume"
            Resource: "*"
          - Effect: "Allow"
            Action: "s3:GetObject"
            Resource: "*"

  BootstrapInstanceProfile:
    Type: "AWS::IAM::InstanceProfile"
    Properties:
      Path: "/"
      Roles:
      - Ref: "BootstrapIamRole"

  BootstrapSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Cluster Bootstrap Security Group
      SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: !Ref AllowedBootstrapSshCidr
      - IpProtocol: tcp
        FromPort: 19531
        ToPort: 19531
        CidrIp: 0.0.0.0/0
      VpcId: !Ref VpcId

  BootstrapInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      IamInstanceProfile: !Ref BootstrapInstanceProfile
      InstanceType: "i3.large"
      NetworkInterfaces:
      - AssociatePublicIpAddress: "true"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "BootstrapSecurityGroup"
        - !Ref "MasterSecurityGroupId"
        SubnetId: !Ref "PublicSubnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"replace":{"source":"${S3Loc}","verification":{}}},"timeouts":{},"version":"2.1.0"},"networkd":{},"passwd":{},"storage":{},"systemd":{}}'
        - {
          S3Loc: !Ref BootstrapIgnitionLocation
        }

  RegisterBootstrapApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref ExternalApiTargetGroupArn
      TargetIp: !GetAtt BootstrapInstance.PrivateIp

  RegisterBootstrapInternalApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalApiTargetGroupArn
      TargetIp: !GetAtt BootstrapInstance.PrivateIp

  RegisterBootstrapInternalServiceTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalServiceTargetGroupArn
      TargetIp: !GetAtt BootstrapInstance.PrivateIp

Outputs:
  BootstrapInstanceId:
    Description: Bootstrap Instance ID.
    Value: !Ref BootstrapInstance

  BootstrapPublicIp:
    Description: The bootstrap node public IP address.
    Value: !GetAtt BootstrapInstance.PublicIp

  BootstrapPrivateIp:
    Description: The bootstrap node private IP address.
    Value: !GetAtt BootstrapInstance.PrivateIp

Creating the control plane machines in AWS

You must create the control plane machines in Amazon Web Services (AWS) for your cluster to use. The easiest way to create these nodes is to modify the provided CloudFormation template.

If you do not use the provided CloudFormation template to create your control plane nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites
  • Configure an AWS account.

  • Generate the Ignition config files for your cluster.

  • Create and configure a VPC and associated subnets in AWS.

  • Create and configure DNS, load balancers, and listeners in AWS.

  • Create control plane and compute roles.

  • Create the bootstrap machine.

Procedure
  1. Create a JSON file that contains the parameter values that the template requires:

    [
      {
        "ParameterKey": "InfrastructureName", (1)
        "ParameterValue": "mycluster-<random_string>" (2)
      },
      {
        "ParameterKey": "RhcosAmi", (3)
        "ParameterValue": "ami-<random_string>" (4)
      },
      {
        "ParameterKey": "AutoRegisterDNS", (5)
        "ParameterValue": "yes" (6)
      },
      {
        "ParameterKey": "PrivateHostedZoneId", (7)
        "ParameterValue": "<random_string>" (8)
      },
      {
        "ParameterKey": "PrivateHostedZoneName", (9)
        "ParameterValue": "mycluster.example.com" (10)
      },
      {
        "ParameterKey": "Master0Subnet", (11)
        "ParameterValue": "subnet-<random_string>" (12)
      },
      {
        "ParameterKey": "Master1Subnet", (11)
        "ParameterValue": "subnet-<random_string>" (12)
      },
      {
        "ParameterKey": "Master2Subnet", (11)
        "ParameterValue": "subnet-<random_string>" (12)
      },
      {
        "ParameterKey": "MasterSecurityGroupId", (13)
        "ParameterValue": "sg-<random_string>" (14)
      },
      {
        "ParameterKey": "IgnitionLocation", (15)
        "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/master" (16)
      },
      {
        "ParameterKey": "CertificateAuthorities", (17)
        "ParameterValue": "data:text/plain;charset=utf-8;base64,ABC...xYz==" (18)
      },
      {
        "ParameterKey": "MasterInstanceProfileName", (19)
        "ParameterValue": "<roles_stack>-MasterInstanceProfile-<random_string>" (20)
      },
      {
        "ParameterKey": "MasterInstanceType", (21)
        "ParameterValue": "m4.xlarge" (22)
      },
      {
        "ParameterKey": "AutoRegisterELB", (23)
        "ParameterValue": "yes" (24)
      },
      {
        "ParameterKey": "RegisterNlbIpTargetsLambdaArn", (25)
        "ParameterValue": "arn:aws:lambda:<region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" (26)
      },
      {
        "ParameterKey": "ExternalApiTargetGroupArn", (27)
        "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" (28)
      },
      {
        "ParameterKey": "InternalApiTargetGroupArn", (29)
        "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" (30)
      },
      {
        "ParameterKey": "InternalServiceTargetGroupArn", (31)
        "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" (32)
      }
    ]
    1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.
    2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.
    3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the control plane machines.
    4 Specify an AWS::EC2::Image::Id value.
    5 Whether or not to perform DNS etcd registration.
    6 Specify yes or no. If you specify yes, you must provide Hosted Zone information.
    7 The Route53 private zone ID to register the etcd targets with.
    8 Specify the PrivateHostedZoneId value from the output of the CloudFormation template for DNS and load balancing.
    9 The Route53 zone to register the targets with.
    10 Specify <cluster_name>.<domain_name> where <domain_name> is the Route53 base domain that you used when you generated install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console.
    11 A subnet, preferably private, to launch the control plane machines on.
    12 Specify a subnet from the PrivateSubnetIds value from the output of the CloudFormation template for the VPC.
    13 The master security group ID to associate with master nodes.
    14 Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles.
    15 The location to fetch control plane Ignition config file from.
    16 Specify the generated Ignition config file location, https://api-int.<cluster_name>.<domain_name>:22623/config/master.
    17 The base64 encoded certificate authority string to use.
    18 Specify the value from the master.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC…​xYz==.
    19 The IAM profile to associate with master nodes.
    20 Specify the MasterInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles.
    21 The type of AWS instance to use for the control plane machines.
    22 Allowed values:
    • m4.xlarge

    • m4.2xlarge

    • m4.4xlarge

    • m4.8xlarge

    • m4.10xlarge

    • m4.16xlarge

    • c4.2xlarge

    • c4.4xlarge

    • c4.8xlarge

    • r4.xlarge

    • r4.2xlarge

    • r4.4xlarge

    • r4.8xlarge

    • r4.16xlarge

      If m4 instance types are not available in your region, such as with eu-west-3, specify an m5 type, such as m5.xlarge, instead.

    23 Whether or not to register a network load balancer (NLB).
    24 Specify yes or no. If you specify yes, you must provide a Lambda Amazon Resource Name (ARN) value.
    25 The ARN of the Lambda function for NLB IP target registration.
    26 Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing.
    27 The ARN for external API load balancer target group.
    28 Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing.
    29 The ARN for internal API load balancer target group.
    30 Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing.
    31 The ARN for internal service load balancer target group.
    32 Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing.
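
    A quick way to obtain the certificate authority value, assuming that the jq tool is installed, is to read it directly from the master.ign file:

    $ jq -r '.ignition.security.tls.certificateAuthorities[0].source' <installation_directory>/master.ign
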
  2. Copy the template from the CloudFormation template for control plane machines section of this topic and save it as a YAML file on your computer. This template describes the control plane machines that your cluster requires.

  3. If you specified an m5 instance type as the value for MasterInstanceType, add that instance type to the MasterInstanceType.AllowedValues parameter in the CloudFormation template.

  4. Launch the template:

    You must enter the command on a single line.

    $ aws cloudformation create-stack --stack-name <name> (1)
         --template-body file://<template>.yaml (2)
         --parameters file://<parameters>.json (3)
    1 <name> is the name for the CloudFormation stack, such as cluster-control-plane. You need the name of this stack if you remove the cluster.
    2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved.
    3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file.
  5. Confirm that the template components exist:

    $ aws cloudformation describe-stacks --stack-name <name>
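
    After the StackStatus displays CREATE_COMPLETE, the PrivateIPs output lists the control plane IP addresses. For example, you can print that value directly:

    $ aws cloudformation describe-stacks --stack-name <name> \
         --query 'Stacks[0].Outputs[?OutputKey==`PrivateIPs`].OutputValue' --output text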

CloudFormation template for control plane machines

You can use the following CloudFormation template to deploy the control plane machines that you need for your OpenShift Container Platform cluster.

AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Node Launch (EC2 master instances)

Parameters:
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider.
    Type: String
  RhcosAmi:
    Description: Current Red Hat Enterprise Linux CoreOS AMI to use for the control plane machines.
    Type: AWS::EC2::Image::Id
  AutoRegisterDNS:
    Default: "yes"
    AllowedValues:
    - "yes"
    - "no"
    Description: Do you want to invoke DNS etcd registration, which requires Hosted Zone information?
    Type: String
  PrivateHostedZoneId:
    Description: The Route53 private zone ID to register the etcd targets with, such as Z21IXYZABCZ2A4.
    Type: String
  PrivateHostedZoneName:
    Description: The Route53 zone to register the targets with, such as cluster.example.com. Omit the trailing period.
    Type: String
  Master0Subnet:
    Description: The subnet, preferably private, to launch master node 0 into.
    Type: AWS::EC2::Subnet::Id
  Master1Subnet:
    Description: The subnet, preferably private, to launch master node 1 into.
    Type: AWS::EC2::Subnet::Id
  Master2Subnet:
    Description: The subnet, preferably private, to launch master node 2 into.
    Type: AWS::EC2::Subnet::Id
  MasterSecurityGroupId:
    Description: The master security group ID to associate with master nodes.
    Type: AWS::EC2::SecurityGroup::Id
  IgnitionLocation:
    Default: https://api-int.$CLUSTER_NAME.$DOMAIN:22623/config/master
    Description: Ignition config file location.
    Type: String
  CertificateAuthorities:
    Default: data:text/plain;charset=utf-8;base64,ABC...xYz==
    Description: Base64 encoded certificate authority string to use.
    Type: String
  MasterInstanceProfileName:
    Description: IAM profile to associate with master nodes.
    Type: String
  MasterInstanceType:
    Default: m4.xlarge
    Type: String
    AllowedValues:
    - "m4.xlarge"
    - "m4.2xlarge"
    - "m4.4xlarge"
    - "m4.8xlarge"
    - "m4.10xlarge"
    - "m4.16xlarge"
    - "c4.2xlarge"
    - "c4.4xlarge"
    - "c4.8xlarge"
    - "r4.xlarge"
    - "r4.2xlarge"
    - "r4.4xlarge"
    - "r4.8xlarge"
    - "r4.16xlarge"
  AutoRegisterELB:
    Default: "yes"
    AllowedValues:
    - "yes"
    - "no"
    Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter?
    Type: String
  RegisterNlbIpTargetsLambdaArn:
    Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
    Type: String
  ExternalApiTargetGroupArn:
    Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
    Type: String
  InternalApiTargetGroupArn:
    Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
    Type: String
  InternalServiceTargetGroupArn:
    Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
    Type: String

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - InfrastructureName
    - Label:
        default: "Host Information"
      Parameters:
      - MasterInstanceType
      - RhcosAmi
      - IgnitionLocation
      - CertificateAuthorities
      - MasterSecurityGroupId
      - MasterInstanceProfileName
    - Label:
        default: "Network Configuration"
      Parameters:
      - Master0Subnet
      - Master1Subnet
      - Master2Subnet
    - Label:
        default: "DNS"
      Parameters:
      - AutoRegisterDNS
      - PrivateHostedZoneName
      - PrivateHostedZoneId
    - Label:
        default: "Load Balancer Automation"
      Parameters:
      - AutoRegisterELB
      - RegisterNlbIpTargetsLambdaArn
      - ExternalApiTargetGroupArn
      - InternalApiTargetGroupArn
      - InternalServiceTargetGroupArn
    ParameterLabels:
      InfrastructureName:
        default: "Infrastructure Name"
      Master0Subnet:
        default: "Master-0 Subnet"
      Master1Subnet:
        default: "Master-1 Subnet"
      Master2Subnet:
        default: "Master-2 Subnet"
      MasterInstanceType:
        default: "Master Instance Type"
      MasterInstanceProfileName:
        default: "Master Instance Profile Name"
      RhcosAmi:
        default: "Red Hat Enterprise Linux CoreOS AMI ID"
      IgnitionLocation:
        default: "Master Ignition Source"
      CertificateAuthorities:
        default: "Ignition CA String"
      MasterSecurityGroupId:
        default: "Master Security Group ID"
      AutoRegisterDNS:
        default: "Use Provided DNS Automation"
      AutoRegisterELB:
        default: "Use Provided ELB Automation"
      PrivateHostedZoneName:
        default: "Private Hosted Zone Name"
      PrivateHostedZoneId:
        default: "Private Hosted Zone ID"

Conditions:
  DoRegistration: !Equals ["yes", !Ref AutoRegisterELB]
  DoDns: !Equals ["yes", !Ref AutoRegisterDNS]

Resources:
  Master0:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      BlockDeviceMappings:
      - DeviceName: /dev/xvda
        Ebs:
          VolumeSize: "120"
          VolumeType: "gp2"
      IamInstanceProfile: !Ref MasterInstanceProfileName
      InstanceType: !Ref MasterInstanceType
      NetworkInterfaces:
      - AssociatePublicIpAddress: "false"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "MasterSecurityGroupId"
        SubnetId: !Ref "Master0Subnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"append":[{"source":"${SOURCE}","verification":{}}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}","verification":{}}]}},"timeouts":{},"version":"2.2.0"},"networkd":{},"passwd":{},"storage":{},"systemd":{}}'
        - {
          SOURCE: !Ref IgnitionLocation,
          CA_BUNDLE: !Ref CertificateAuthorities,
        }
      Tags:
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "shared"

  RegisterMaster0:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref ExternalApiTargetGroupArn
      TargetIp: !GetAtt Master0.PrivateIp

  RegisterMaster0InternalApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalApiTargetGroupArn
      TargetIp: !GetAtt Master0.PrivateIp

  RegisterMaster0InternalServiceTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalServiceTargetGroupArn
      TargetIp: !GetAtt Master0.PrivateIp

  Master1:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      BlockDeviceMappings:
      - DeviceName: /dev/xvda
        Ebs:
          VolumeSize: "120"
          VolumeType: "gp2"
      IamInstanceProfile: !Ref MasterInstanceProfileName
      InstanceType: !Ref MasterInstanceType
      NetworkInterfaces:
      - AssociatePublicIpAddress: "false"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "MasterSecurityGroupId"
        SubnetId: !Ref "Master1Subnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"append":[{"source":"${SOURCE}","verification":{}}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}","verification":{}}]}},"timeouts":{},"version":"2.2.0"},"networkd":{},"passwd":{},"storage":{},"systemd":{}}'
        - {
          SOURCE: !Ref IgnitionLocation,
          CA_BUNDLE: !Ref CertificateAuthorities,
        }
      Tags:
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "shared"

  RegisterMaster1:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref ExternalApiTargetGroupArn
      TargetIp: !GetAtt Master1.PrivateIp

  RegisterMaster1InternalApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalApiTargetGroupArn
      TargetIp: !GetAtt Master1.PrivateIp

  RegisterMaster1InternalServiceTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalServiceTargetGroupArn
      TargetIp: !GetAtt Master1.PrivateIp

  Master2:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      BlockDeviceMappings:
      - DeviceName: /dev/xvda
        Ebs:
          VolumeSize: "120"
          VolumeType: "gp2"
      IamInstanceProfile: !Ref MasterInstanceProfileName
      InstanceType: !Ref MasterInstanceType
      NetworkInterfaces:
      - AssociatePublicIpAddress: "false"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "MasterSecurityGroupId"
        SubnetId: !Ref "Master2Subnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"append":[{"source":"${SOURCE}","verification":{}}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}","verification":{}}]}},"timeouts":{},"version":"2.2.0"},"networkd":{},"passwd":{},"storage":{},"systemd":{}}'
        - {
          SOURCE: !Ref IgnitionLocation,
          CA_BUNDLE: !Ref CertificateAuthorities,
        }
      Tags:
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "shared"

  RegisterMaster2:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref ExternalApiTargetGroupArn
      TargetIp: !GetAtt Master2.PrivateIp

  RegisterMaster2InternalApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalApiTargetGroupArn
      TargetIp: !GetAtt Master2.PrivateIp

  RegisterMaster2InternalServiceTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalServiceTargetGroupArn
      TargetIp: !GetAtt Master2.PrivateIp

  EtcdSrvRecords:
    Condition: DoDns
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref PrivateHostedZoneId
      Name: !Join [".", ["_etcd-server-ssl._tcp", !Ref PrivateHostedZoneName]]
      ResourceRecords:
      - !Join [
        " ",
        ["0 10 2380", !Join [".", ["etcd-0", !Ref PrivateHostedZoneName]]],
      ]
      - !Join [
        " ",
        ["0 10 2380", !Join [".", ["etcd-1", !Ref PrivateHostedZoneName]]],
      ]
      - !Join [
        " ",
        ["0 10 2380", !Join [".", ["etcd-2", !Ref PrivateHostedZoneName]]],
      ]
      TTL: 60
      Type: SRV

  Etcd0Record:
    Condition: DoDns
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref PrivateHostedZoneId
      Name: !Join [".", ["etcd-0", !Ref PrivateHostedZoneName]]
      ResourceRecords:
      - !GetAtt Master0.PrivateIp
      TTL: 60
      Type: A

  Etcd1Record:
    Condition: DoDns
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref PrivateHostedZoneId
      Name: !Join [".", ["etcd-1", !Ref PrivateHostedZoneName]]
      ResourceRecords:
      - !GetAtt Master1.PrivateIp
      TTL: 60
      Type: A

  Etcd2Record:
    Condition: DoDns
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref PrivateHostedZoneId
      Name: !Join [".", ["etcd-2", !Ref PrivateHostedZoneName]]
      ResourceRecords:
      - !GetAtt Master2.PrivateIp
      TTL: 60
      Type: A

Outputs:
  PrivateIPs:
    Description: The control-plane node private IP addresses.
    Value:
      !Join [
        ",",
        [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp]
      ]

Initializing the bootstrap node on AWS with user-provisioned infrastructure

After you create all of the required infrastructure in Amazon Web Services (AWS), you can install the cluster.

Prerequisites
  • Configure an AWS account.

  • Generate the Ignition config files for your cluster.

  • Create and configure a VPC and associated subnets in AWS.

  • Create and configure DNS, load balancers, and listeners in AWS.

  • Create control plane and compute roles.

  • Create the bootstrap machine.

  • Create the control plane machines.

  • If you plan to manually manage the worker machines, create the worker machines.

Procedure
  1. Change to the directory that contains the installation program and run the following command:

    $ ./openshift-install wait-for bootstrap-complete --dir=<installation_directory> \ (1)
        --log-level=info (2)
    1 For <installation_directory>, specify the path to the directory that you stored the installation files in.
    2 To view different installation details, specify warn, debug, or error instead of info.

    If the command exits without a FATAL warning, your production control plane has initialized.

Creating the worker nodes in AWS

You can create worker nodes in Amazon Web Services (AWS) for your cluster to use. The easiest way to manually create these nodes is to modify the provided CloudFormation template.

The CloudFormation template creates a stack that represents one worker machine. You must create a stack for each worker machine.

If you do not use the provided CloudFormation template to create your worker nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites
  • Configure an AWS account.

  • Generate the Ignition config files for your cluster.

  • Create and configure a VPC and associated subnets in AWS.

  • Create and configure DNS, load balancers, and listeners in AWS.

  • Create control plane and compute roles.

  • Create the bootstrap machine.

  • Create the control plane machines.

Procedure
  1. Create a JSON file that contains the parameter values that the CloudFormation template requires:

    [
      {
        "ParameterKey": "InfrastructureName", (1)
        "ParameterValue": "mycluster-<random_string>" (2)
      },
      {
        "ParameterKey": "RhcosAmi", (3)
        "ParameterValue": "ami-<random_string>" (4)
      },
      {
        "ParameterKey": "Subnet", (5)
        "ParameterValue": "subnet-<random_string>" (6)
      },
      {
        "ParameterKey": "WorkerSecurityGroupId", (7)
        "ParameterValue": "sg-<random_string>" (8)
      },
      {
        "ParameterKey": "IgnitionLocation", (9)
        "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/worker" (10)
      },
      {
        "ParameterKey": "CertificateAuthorities", (11)
        "ParameterValue": "" (12)
      },
      {
        "ParameterKey": "WorkerInstanceProfileName", (13)
        "ParameterValue": "" (14)
      },
      {
        "ParameterKey": "WorkerInstanceType", (15)
        "ParameterValue": "m4.large" (16)
      }
    ]
    1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.
    2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.
    3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the worker nodes.
    4 Specify an AWS::EC2::Image::Id value.
    5 A subnet, preferably private, to launch the worker nodes on.
    6 Specify a subnet from the PrivateSubnetIds value from the output of the CloudFormation template for the VPC.
    7 The worker security group ID to associate with worker nodes.
    8 Specify the WorkerSecurityGroupId value from the output of the CloudFormation template for the security group and roles.
    9 The location to fetch the worker Ignition config file from.
    10 Specify the generated Ignition config location, https://api-int.<cluster_name>.<domain_name>:22623/config/worker.
    11 Base64 encoded certificate authority string to use.
    12 Specify the value from the worker.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC…​xYz==.
    13 The IAM profile to associate with worker nodes.
    14 Specify the WorkerInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles.
    15 The type of AWS instance to use for the worker machines.
    16 Allowed values:
    • m4.large

    • m4.xlarge

    • m4.2xlarge

    • m4.4xlarge

    • m4.8xlarge

    • m4.10xlarge

    • m4.16xlarge

    • c4.large

    • c4.xlarge

    • c4.2xlarge

    • c4.4xlarge

    • c4.8xlarge

    • r4.large

    • r4.xlarge

    • r4.2xlarge

    • r4.4xlarge

    • r4.8xlarge

    • r4.16xlarge

      If m4 instance types are not available in your region, such as with eu-west-3, use m5 types instead.

  2. Copy the template from the CloudFormation template for worker machines section of this topic and save it as a YAML file on your computer. This template describes the worker machines that your cluster requires.

  3. If you specified an m5 instance type as the value for WorkerInstanceType, add that instance type to the WorkerInstanceType.AllowedValues parameter in the CloudFormation template.

  4. Create a worker stack.

    1. Launch the template:

      You must enter the command on a single line.

      $ aws cloudformation create-stack --stack-name <name> (1)
           --template-body file://<template>.yaml (2)
           --parameters file://<parameters>.json (3)
      1 <name> is the name for the CloudFormation stack, such as cluster-workers. You need the name of this stack if you remove the cluster.
      2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved.
      3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file.
    2. Confirm that the template components exist:

      $ aws cloudformation describe-stacks --stack-name <name>
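      The stack is ready when its StackStatus value is CREATE_COMPLETE. As a sketch, you can query the status directly or block until creation finishes:

      $ aws cloudformation describe-stacks --stack-name <name> --query 'Stacks[].StackStatus' --output text
      $ aws cloudformation wait stack-create-complete --stack-name <name>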
  5. Continue to create worker stacks until you have created enough worker machines for your cluster.

    You must create at least two worker machines, so you must create at least two stacks that use this CloudFormation template.
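    As a sketch, assuming that you saved one parameters file per machine, such as worker-0.json and worker-1.json, you can create the stacks in a loop; the stack and file names are illustrative:

      $ for i in 0 1
      > do
      >   aws cloudformation create-stack --stack-name cluster-worker-${i} \
      >        --template-body file://<template>.yaml \
      >        --parameters file://worker-${i}.json
      > done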

CloudFormation template for worker machines

You can use the following CloudFormation template to deploy the worker machines that you need for your OpenShift Container Platform cluster.

AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Node Launch (EC2 worker instance)

Parameters:
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider.
    Type: String
  RhcosAmi:
    Description: Current Red Hat Enterprise Linux CoreOS AMI to use for the worker nodes.
    Type: AWS::EC2::Image::Id
  Subnet:
    Description: The subnet, preferably private, to launch the worker nodes into.
    Type: AWS::EC2::Subnet::Id
  WorkerSecurityGroupId:
    Description: The worker security group ID to associate with worker nodes.
    Type: AWS::EC2::SecurityGroup::Id
  IgnitionLocation:
    Default: https://api-int.$CLUSTER_NAME.$DOMAIN:22623/config/worker
    Description: Ignition config file location.
    Type: String
  CertificateAuthorities:
    Default: data:text/plain;charset=utf-8;base64,ABC...xYz==
    Description: Base64 encoded certificate authority string to use.
    Type: String
  WorkerInstanceProfileName:
    Description: IAM profile to associate with worker nodes.
    Type: String
  WorkerInstanceType:
    Default: m4.large
    Type: String
    AllowedValues:
    - "m4.large"
    - "m4.xlarge"
    - "m4.2xlarge"
    - "m4.4xlarge"
    - "m4.8xlarge"
    - "m4.10xlarge"
    - "m4.16xlarge"
    - "c4.large"
    - "c4.xlarge"
    - "c4.2xlarge"
    - "c4.4xlarge"
    - "c4.8xlarge"
    - "r4.large"
    - "r4.xlarge"
    - "r4.2xlarge"
    - "r4.4xlarge"
    - "r4.8xlarge"
    - "r4.16xlarge"

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - InfrastructureName
    - Label:
        default: "Host Information"
      Parameters:
      - WorkerInstanceType
      - RhcosAmi
      - IgnitionLocation
      - CertificateAuthorities
      - WorkerSecurityGroupId
      - WorkerInstanceProfileName
    - Label:
        default: "Network Configuration"
      Parameters:
      - Subnet
    ParameterLabels:
      Subnet:
        default: "Subnet"
      InfrastructureName:
        default: "Infrastructure Name"
      WorkerInstanceType:
        default: "Worker Instance Type"
      WorkerInstanceProfileName:
        default: "Worker Instance Profile Name"
      RhcosAmi:
        default: "Red Hat Enterprise Linux CoreOS AMI ID"
      IgnitionLocation:
        default: "Worker Ignition Source"
      CertificateAuthorities:
        default: "Ignition CA String"
      WorkerSecurityGroupId:
        default: "Worker Security Group ID"

Resources:
  Worker0:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      BlockDeviceMappings:
      - DeviceName: /dev/xvda
        Ebs:
          VolumeSize: "120"
          VolumeType: "gp2"
      IamInstanceProfile: !Ref WorkerInstanceProfileName
      InstanceType: !Ref WorkerInstanceType
      NetworkInterfaces:
      - AssociatePublicIpAddress: "false"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "WorkerSecurityGroupId"
        SubnetId: !Ref "Subnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"append":[{"source":"${SOURCE}","verification":{}}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}","verification":{}}]}},"timeouts":{},"version":"2.2.0"},"networkd":{},"passwd":{},"storage":{},"systemd":{}}'
        - {
          SOURCE: !Ref IgnitionLocation,
          CA_BUNDLE: !Ref CertificateAuthorities,
        }
      Tags:
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "shared"

Outputs:
  PrivateIP:
    Description: The compute node private IP address.
    Value: !GetAtt Worker0.PrivateIp
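Before you launch a stack from this template, you can check it for syntax errors. The following is a sketch, assuming that you saved the template as worker.yaml:

$ aws cloudformation validate-template --template-body file://worker.yaml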

Installing the CLI

You can install the CLI in order to interact with OpenShift Container Platform using a command-line interface.

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.2. Download and install the new version of oc.

Procedure
  1. From the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site, navigate to the page for your installation type and click Download Command-line Tools.

  2. Click the folder for your operating system and architecture and click the compressed file.

    You can install oc on Linux, Windows, or macOS.

  3. Save the file to your file system.

  4. Extract the compressed file.

  5. Place the oc binary in a directory that is on your PATH.
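For example, on Linux, the final steps might look like the following sketch; the archive name varies by release and platform:

$ tar xvzf <oc_archive>.tar.gz
$ sudo mv oc /usr/local/bin/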

After you install the CLI, it is available using the oc command:

$ oc <command>

Logging in to the cluster

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites
  • Deploy an OpenShift Container Platform cluster.

  • Install the oc CLI.

Procedure
  1. Export the kubeadmin credentials:

    $ export KUBECONFIG=<installation_directory>/auth/kubeconfig (1)
    1 For <installation_directory>, specify the path to the directory that you stored the installation files in.
  2. Verify you can run oc commands successfully using the exported configuration:

    $ oc whoami
    system:admin

Approving the CSRs for your machines

When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself.

Prerequisites
  • You added machines to your cluster.

  • Install the jq package.

Procedure
  1. Confirm that the cluster recognizes the machines:

    $ oc get nodes
    
    NAME      STATUS    ROLES   AGE  VERSION
    master-0  Ready     master  63m  v1.14.6+c4799753c
    master-1  Ready     master  63m  v1.14.6+c4799753c
    master-2  Ready     master  64m  v1.14.6+c4799753c
    worker-0  NotReady  worker  76s  v1.14.6+c4799753c
    worker-1  NotReady  worker  70s  v1.14.6+c4799753c

    The output lists all of the machines that you created.

  2. Review the pending certificate signing requests (CSRs) and ensure that you see a client and server request with Pending or Approved status for each machine that you added to the cluster:

    $ oc get csr
    
    NAME        AGE     REQUESTOR                                                                   CONDITION
    csr-8b2br   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending (1)
    csr-8vnps   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
    csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal                       Pending (2)
    csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal                       Pending
    ...
    1 A client request CSR.
    2 A server request CSR.

    In this example, two machines are joining the cluster. You might see more approved CSRs in the list.

  3. If the remaining CSRs were not approved and are in Pending status, approve the CSRs for your cluster machines:

    Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After you approve the initial CSRs, the subsequent node client CSRs are automatically approved by the cluster kube-controller-manager. You must implement a method of automatically approving the kubelet serving certificate requests.

    • To approve them individually, run the following command for each valid CSR:

      $ oc adm certificate approve <csr_name> (1)
      1 <csr_name> is the name of a CSR from the list of current CSRs.
    • If all the CSRs are valid, approve them all by running the following command:

      $ oc get csr -ojson | jq -r '.items[] | select(.status == {} ) | .metadata.name' | xargs oc adm certificate approve
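    If you need a method of automatically approving certificate requests, one minimal sketch is a loop that periodically approves any pending CSRs; it assumes GNU xargs for the --no-run-if-empty option and is appropriate only where blanket approval is acceptable:

      $ while true
      > do
      >   oc get csr -ojson | jq -r '.items[] | select(.status == {} ) | .metadata.name' | \
      >       xargs --no-run-if-empty oc adm certificate approve
      >   sleep 60
      > done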

Initial Operator configuration

After the control plane initializes, you must immediately configure some Operators so that they all become available.

Prerequisites
  • Your control plane has initialized.

Procedure
  1. Watch the cluster components come online:

    $ watch -n5 oc get clusteroperators
    
    NAME                                 VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
    authentication                       4.2.0     True        False         False      69s
    cloud-credential                     4.2.0     True        False         False      12m
    cluster-autoscaler                   4.2.0     True        False         False      11m
    console                              4.2.0     True        False         False      46s
    dns                                  4.2.0     True        False         False      11m
    image-registry                       4.2.0     False       True          False      5m26s
    ingress                              4.2.0     True        False         False      5m36s
    kube-apiserver                       4.2.0     True        False         False      8m53s
    kube-controller-manager              4.2.0     True        False         False      7m24s
    kube-scheduler                       4.2.0     True        False         False      12m
    machine-api                          4.2.0     True        False         False      12m
    machine-config                       4.2.0     True        False         False      7m36s
    marketplace                          4.2.0     True        False         False      7m54s
    monitoring                           4.2.0     True        False         False      7m54s
    network                              4.2.0     True        False         False      5m9s
    node-tuning                          4.2.0     True        False         False      11m
    openshift-apiserver                  4.2.0     True        False         False      11m
    openshift-controller-manager         4.2.0     True        False         False      5m43s
    openshift-samples                    4.2.0     True        False         False      3m55s
    operator-lifecycle-manager           4.2.0     True        False         False      11m
    operator-lifecycle-manager-catalog   4.2.0     True        False         False      11m
    service-ca                           4.2.0     True        False         False      11m
    service-catalog-apiserver            4.2.0     True        False         False      5m26s
    service-catalog-controller-manager   4.2.0     True        False         False      5m25s
    storage                              4.2.0     True        False         False      5m30s
  2. Configure the Operators that are not available.
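    As a sketch, instead of watching the table manually, you can block until every Operator reports Available; the timeout value is an example:

      $ oc wait clusteroperators --all --for=condition=Available --timeout=30m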

Image registry storage configuration

If the image-registry Operator is not available, you must configure storage for it. Instructions are shown both for configuring a PersistentVolume, which is required for production clusters, and for configuring an empty directory as the storage location, which is available only for non-production clusters.

Configuring registry storage for AWS with user-provisioned infrastructure

During installation, your cloud credentials are sufficient to create an S3 bucket and the Registry Operator will automatically configure storage.

If the Registry Operator cannot create an S3 bucket and automatically configure storage, you can create an S3 bucket and configure storage with the following procedure.

Prerequisites
  • A cluster on AWS with user-provisioned infrastructure.

  • For S3 on AWS storage, the secret is expected to contain two keys:

    • REGISTRY_STORAGE_S3_ACCESSKEY

    • REGISTRY_STORAGE_S3_SECRETKEY

Procedure

Use the following procedure if the Registry Operator cannot create an S3 bucket and automatically configure storage.

  1. Set up a Bucket Lifecycle Policy to abort incomplete multipart uploads that are one day old, as shown in the sketch after this procedure.

  2. Fill in the storage configuration in configs.imageregistry.operator.openshift.io/cluster:

    $ oc edit configs.imageregistry.operator.openshift.io/cluster
    
    storage:
      s3:
        bucket: <bucket-name>
        region: <region-name>

To secure your registry images in AWS, block public access to the S3 bucket.
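As a sketch, assuming the AWS CLI is installed, the lifecycle policy from step 1 and the public access block might look like the following; verify both against your company's policies:

$ aws s3api put-bucket-lifecycle-configuration --bucket <bucket-name> \
     --lifecycle-configuration '{"Rules": [{"ID": "abort-incomplete-uploads", "Status": "Enabled", "Filter": {"Prefix": ""}, "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 1}}]}'
$ aws s3api put-public-access-block --bucket <bucket-name> \
     --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true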

Configuring storage for the image registry in non-production clusters

You must configure storage for the image registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry.

Procedure
  • To set the image registry storage to an empty directory:

    $ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'

    Configure this option for only non-production clusters.

    If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error:

    Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found

    Wait a few minutes and run the command again.
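    Alternatively, as a sketch, you can wrap the command in a retry loop that waits until the resource exists:

    $ until oc patch configs.imageregistry.operator.openshift.io cluster --type merge \
    >     --patch '{"spec":{"storage":{"emptyDir":{}}}}'
    > do
    >   sleep 30
    > done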

Deleting the bootstrap resources

After you complete the initial Operator configuration for the cluster, remove the bootstrap resources from Amazon Web Services (AWS).

Prerequisites
  • You completed the initial Operator configuration for your cluster.

Procedure
  1. Delete the bootstrap resources. If you used the CloudFormation template, delete its stack:

    $ aws cloudformation delete-stack --stack-name <name> (1)
    1 <name> is the name of your bootstrap stack.
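    As a sketch, you can block until the deletion finishes:

    $ aws cloudformation wait stack-delete-complete --stack-name <name>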

Creating the Ingress DNS Records

If you removed the DNS Zone configuration, manually create DNS records that point to the Ingress load balancer. You can create either a wildcard record or specific records. While the following procedure uses A records, you can use other record types that you require, such as CNAME or alias. A scripted sketch that ties the lookup steps together follows the procedure.

Prerequisites
  • You deployed an OpenShift Container Platform cluster on Amazon Web Services (AWS) by using infrastructure that you provisioned.

  • Install the OpenShift Command-line Interface (CLI), commonly known as oc.

  • Install the jq package.

  • Download the AWS CLI and install it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix).

Procedure
  1. Determine the routes to create.

    • To create a wildcard record, use *.apps.<cluster_name>.<domain_name>, where <cluster_name> is your cluster name, and <domain_name> is the Route53 base domain for your OpenShift Container Platform cluster.

    • To create specific records, you must create a record for each route that your cluster uses, as shown in the output of the following command:

      $ oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes
      oauth-openshift.apps.<cluster_name>.<domain_name>
      console-openshift-console.apps.<cluster_name>.<domain_name>
      downloads-openshift-console.apps.<cluster_name>.<domain_name>
      alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name>
      grafana-openshift-monitoring.apps.<cluster_name>.<domain_name>
      prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name>
  2. Retrieve the Ingress Operator load balancer status and note the value of the external IP address that it uses, which is shown in the EXTERNAL-IP column:

    $ oc -n openshift-ingress get service router-default
    NAME             TYPE           CLUSTER-IP      EXTERNAL-IP                            PORT(S)                      AGE
    router-default   LoadBalancer   172.30.62.215   ab3...28.us-east-2.elb.amazonaws.com   80:31499/TCP,443:30693/TCP   5m
  3. Locate the hosted zone ID for the load balancer:

    $ aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == "<external_ip>").CanonicalHostedZoneNameID' (1)
    
    Z3AADJGX6KTTL2
    1 For <external_ip>, specify the value of the external IP address of the Ingress Operator load balancer that you obtained.

    The output of this command is the load balancer hosted zone ID.

  4. Obtain the public hosted zone ID for your cluster’s domain:

    $ aws route53 list-hosted-zones-by-name \
                --dns-name "<domain_name>" \ (1)
                --query 'HostedZones[? Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' \ (1)
                --output text
    
    /hostedzone/Z3URY6TWQ91KVV
    1 For <domain_name>, specify the Route53 base domain for your OpenShift Container Platform cluster.

    The public hosted zone ID for your domain is shown in the command output. In this example, it is Z3URY6TWQ91KVV.

  5. Add the alias records to your private zone:

    $ aws route53 change-resource-record-sets --hosted-zone-id "<private_hosted_zone_id>" --change-batch '{ (1)
    >   "Changes": [
    >     {
    >       "Action": "CREATE",
    >       "ResourceRecordSet": {
    >         "Name": "\\052.apps.<cluster_domain>", (2)
    >         "Type": "A",
    >         "AliasTarget":{
    >           "HostedZoneId": "<hosted_zone_id>", (3)
    >           "DNSName": "<external_ip>.", (4)
    >           "EvaluateTargetHealth": false
    >         }
    >       }
    >     }
    >   ]
    > }'
    1 For <private_hosted_zone_id>, specify the value from the output of the CloudFormation template for DNS and load balancing.
    2 For <cluster_domain>, specify the domain or subdomain that you use with your OpenShift Container Platform cluster.
    3 For <hosted_zone_id>, specify the public hosted zone ID for the load balancer that you obtained.
    4 For <external_ip>, specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period (.) in this parameter value.
  6. Add the records to your public zone:

    $ aws route53 change-resource-record-sets --hosted-zone-id "<public_hosted_zone_id>"" --change-batch '{ (1)
    >   "Changes": [
    >     {
    >       "Action": "CREATE",
    >       "ResourceRecordSet": {
    >         "Name": "\\052.apps.<cluster_domain>", (2)
    >         "Type": "A",
    >         "AliasTarget":{
    >           "HostedZoneId": "<hosted_zone_id>", (3)
    >           "DNSName": "<external_ip>.", (4)
    >           "EvaluateTargetHealth": false
    >         }
    >       }
    >     }
    >   ]
    > }'
    1 For <public_hosted_zone_id>, specify the public hosted zone for your domain.
    2 For <cluster_domain>, specify the domain or subdomain that you use with your OpenShift Container Platform cluster.
    3 For <hosted_zone_id>, specify the public hosted zone ID for the load balancer that you obtained.
    4 For <external_ip>, specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period (.) in this parameter value.
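The following sketch ties the lookup steps together by capturing the values into shell variables; the variable names are illustrative, and it assumes that the load balancer address is a host name, as is typical on AWS:

$ EXTERNAL_IP=$(oc -n openshift-ingress get service router-default \
     -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
$ LB_ZONE_ID=$(aws elb describe-load-balancers | jq -r \
     ".LoadBalancerDescriptions[] | select(.DNSName == \"${EXTERNAL_IP}\").CanonicalHostedZoneNameID")
$ PUBLIC_ZONE_ID=$(aws route53 list-hosted-zones-by-name --dns-name "<domain_name>" \
     --query 'HostedZones[? Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' --output text)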

Completing an AWS installation on user-provisioned infrastructure

After you start the OpenShift Container Platform installation on Amazon Web Service (AWS) user-provisioned infrastructure, monitor the deployment to completion.

Prerequisites
  • You removed the bootstrap node for an OpenShift Container Platform cluster on user-provisioned AWS infrastructure.

  • Install the oc CLI and log in.

Procedure
  • Complete the cluster installation:

    $ ./openshift-install --dir=<installation_directory> wait-for install-complete (1)
    
    INFO Waiting up to 30m0s for the cluster to initialize...
    1 For <installation_directory>, specify the path to the directory that you stored the installation files in.

    The Ignition config files that the installation program generates contain certificates that expire after 24 hours. You must keep the cluster running for 24 hours in a non-degraded state to ensure that the first certificate rotation has finished.

Next steps