
In OpenShift Container Platform version 4.8, you can install a cluster on Amazon Web Services (AWS) that uses infrastructure that you provide.

One way to create this infrastructure is to use the provided CloudFormation templates. You can modify the templates to customize your infrastructure or use the information that they contain to create AWS objects according to your company’s policies.

The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several CloudFormation templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example.

Prerequisites

Internet access for OpenShift Container Platform

In OpenShift Container Platform 4.8, you require access to the internet to install your cluster.

You must have internet access to:

  • Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.

  • Access Quay.io to obtain the packages that are required to install your cluster.

  • Obtain the packages that are required to perform cluster updates.

If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.

Required AWS infrastructure components

To install OpenShift Container Platform on user-provisioned infrastructure in Amazon Web Services (AWS), you must manually create both the machines and their supporting infrastructure.

For more information about the integration testing for different platforms, see the OpenShift Container Platform 4.x Tested Integrations page.

By using the provided CloudFormation templates, you can create stacks of AWS resources that represent the following components:

  • An AWS Virtual Private Cloud (VPC)

  • Networking and load balancing components

  • Security groups and roles

  • An OpenShift Container Platform bootstrap node

  • OpenShift Container Platform control plane nodes

  • An OpenShift Container Platform compute node

Alternatively, you can manually create the components or you can reuse existing infrastructure that meets the cluster requirements. Review the CloudFormation templates for more details about how the components interrelate.

Other infrastructure components

  • A VPC

  • DNS entries

  • Load balancers (classic or network) and listeners

  • A public and a private Route 53 zone

  • Security groups

  • IAM roles

  • S3 buckets

If you are working in a disconnected environment or use a proxy, you cannot reach the public IP addresses for EC2 and ELB endpoints. To reach these endpoints, you must create a VPC endpoint and attach it to the subnet that the clusters are using. Create the following endpoints:

  • ec2.<region>.amazonaws.com

  • elasticloadbalancing.<region>.amazonaws.com

  • s3.<region>.amazonaws.com
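
If you create these endpoints yourself rather than with the provided templates, the AWS CLI can be used. The following sketch is an example only and assumes placeholder VPC, subnet, and route table IDs; interface endpoints serve the EC2 and ELB APIs, and a gateway endpoint serves S3:

    # Example only: replace the placeholder values with your own IDs and region.
    $ aws ec2 create-vpc-endpoint --vpc-id <vpc_id> --vpc-endpoint-type Interface --service-name com.amazonaws.<region>.ec2 --subnet-ids <subnet_id>
    $ aws ec2 create-vpc-endpoint --vpc-id <vpc_id> --vpc-endpoint-type Interface --service-name com.amazonaws.<region>.elasticloadbalancing --subnet-ids <subnet_id>
    $ aws ec2 create-vpc-endpoint --vpc-id <vpc_id> --vpc-endpoint-type Gateway --service-name com.amazonaws.<region>.s3 --route-table-ids <route_table_id>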

Required VPC components

You must provide a suitable VPC and subnets that allow communication to your machines.

Each of the following components is listed with its AWS resource types and a description.

VPC

  • AWS::EC2::VPC

  • AWS::EC2::VPCEndpoint

You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3.

Public subnets

  • AWS::EC2::Subnet

  • AWS::EC2::SubnetNetworkAclAssociation

Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules.

Internet gateway

  • AWS::EC2::InternetGateway

  • AWS::EC2::VPCGatewayAttachment

  • AWS::EC2::RouteTable

  • AWS::EC2::Route

  • AWS::EC2::SubnetRouteTableAssociation

  • AWS::EC2::NatGateway

  • AWS::EC2::EIP

You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios.

Network access control

  • AWS::EC2::NetworkAcl

  • AWS::EC2::NetworkAclEntry

You must allow the VPC to access the following ports:

Port            Reason
80              Inbound HTTP traffic
443             Inbound HTTPS traffic
22              Inbound SSH traffic
1024 - 65535    Inbound ephemeral traffic
0 - 65535       Outbound ephemeral traffic

Private subnets

  • AWS::EC2::Subnet

  • AWS::EC2::RouteTable

  • AWS::EC2::SubnetRouteTableAssociation

Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them.

Required DNS and load balancing components

Your DNS and load balancer configuration needs to use a public hosted zone and can use a private hosted zone similar to the one that the installation program uses if it provisions the cluster’s infrastructure. You must create a DNS entry that resolves to your load balancer. An entry for api.<cluster_name>.<domain> must point to the external load balancer, and an entry for api-int.<cluster_name>.<domain> must point to the internal load balancer.

The cluster also requires load balancers and listeners for port 6443, which are required for the Kubernetes API and its extensions, and port 22623, which are required for the Ignition config files for new machines. The targets will be the control plane nodes (also known as the master nodes). Port 6443 must be accessible to both clients external to the cluster and nodes within the cluster. Port 22623 must be accessible to nodes within the cluster.
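
The provided CloudFormation templates create these load balancing resources for you. As an illustration of what they amount to, an internal network load balancer, a target group for port 6443, and its listener could be created with AWS CLI commands similar to the following; the names, IDs, and ARNs are placeholders:

    # Example only: replace the placeholder names, IDs, and ARNs with your own values.
    $ aws elbv2 create-load-balancer --name <infra_id>-int --type network --scheme internal --subnets <private_subnet_ids>
    $ aws elbv2 create-target-group --name <infra_id>-aint --protocol TCP --port 6443 --vpc-id <vpc_id> --target-type ip
    $ aws elbv2 create-listener --load-balancer-arn <lb_arn> --protocol TCP --port 6443 --default-actions Type=forward,TargetGroupArn=<tg_arn>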

Component                     AWS type                                      Description
DNS                           AWS::Route53::HostedZone                      The hosted zone for your internal DNS.
etcd record sets              AWS::Route53::RecordSet                       The registration records for etcd for your control plane machines.
Public load balancer          AWS::ElasticLoadBalancingV2::LoadBalancer     The load balancer for your public subnets.
External API server record    AWS::Route53::RecordSetGroup                  Alias records for the external API server.
External listener             AWS::ElasticLoadBalancingV2::Listener         A listener on port 6443 for the external load balancer.
External target group         AWS::ElasticLoadBalancingV2::TargetGroup      The target group for the external load balancer.
Private load balancer         AWS::ElasticLoadBalancingV2::LoadBalancer     The load balancer for your private subnets.
Internal API server record    AWS::Route53::RecordSetGroup                  Alias records for the internal API server.
Internal listener             AWS::ElasticLoadBalancingV2::Listener         A listener on port 22623 for the internal load balancer.
Internal target group         AWS::ElasticLoadBalancingV2::TargetGroup      The target group for the internal load balancer.
Internal listener             AWS::ElasticLoadBalancingV2::Listener         A listener on port 6443 for the internal load balancer.
Internal target group         AWS::ElasticLoadBalancingV2::TargetGroup      The target group for the internal load balancer.

Security groups

The control plane and worker machines require access to the following ports:

Group                     Type                        IP Protocol    Port range
MasterSecurityGroup       AWS::EC2::SecurityGroup     icmp           0
                                                      tcp            22
                                                      tcp            6443
                                                      tcp            22623
WorkerSecurityGroup       AWS::EC2::SecurityGroup     icmp           0
                                                      tcp            22
BootstrapSecurityGroup    AWS::EC2::SecurityGroup     tcp            22
                                                      tcp            19531
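
If you create the security groups without the templates, each row above corresponds to an ordinary AWS CLI call. For example, a control plane security group and a single ingress rule for the Kubernetes API might look like the following; the group name, VPC ID, and CIDR are placeholders:

    # Example only: replace the placeholder names, IDs, and CIDR with your own values.
    $ aws ec2 create-security-group --group-name <infra_id>-master-sg --description "Control plane security group" --vpc-id <vpc_id>
    $ aws ec2 authorize-security-group-ingress --group-id <sg_id> --protocol tcp --port 6443 --cidr <machine_network_cidr>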

Control plane Ingress

The control plane machines require the following Ingress groups. Each Ingress group is an AWS::EC2::SecurityGroupIngress resource.

Ingress group                             Description                                                     IP protocol    Port range
MasterIngressEtcd                         etcd                                                            tcp            2379 - 2380
MasterIngressVxlan                        Vxlan packets                                                   udp            4789
MasterIngressWorkerVxlan                  Vxlan packets                                                   udp            4789
MasterIngressInternal                     Internal cluster communication and Kubernetes proxy metrics    tcp            9000 - 9999
MasterIngressWorkerInternal               Internal cluster communication                                  tcp            9000 - 9999
MasterIngressKube                         Kubernetes kubelet, scheduler and controller manager           tcp            10250 - 10259
MasterIngressWorkerKube                   Kubernetes kubelet, scheduler and controller manager           tcp            10250 - 10259
MasterIngressIngressServices              Kubernetes Ingress services                                     tcp            30000 - 32767
MasterIngressWorkerIngressServices        Kubernetes Ingress services                                     tcp            30000 - 32767
MasterIngressGeneve                       Geneve packets                                                  udp            6081
MasterIngressWorkerGeneve                 Geneve packets                                                  udp            6081
MasterIngressIpsecIke                     IPsec IKE packets                                               udp            500
MasterIngressWorkerIpsecIke               IPsec IKE packets                                               udp            500
MasterIngressIpsecNat                     IPsec NAT-T packets                                             udp            4500
MasterIngressWorkerIpsecNat               IPsec NAT-T packets                                             udp            4500
MasterIngressIpsecEsp                     IPsec ESP packets                                               50             All
MasterIngressWorkerIpsecEsp               IPsec ESP packets                                               50             All
MasterIngressInternalUDP                  Internal cluster communication                                  udp            9000 - 9999
MasterIngressWorkerInternalUDP            Internal cluster communication                                  udp            9000 - 9999
MasterIngressIngressServicesUDP           Kubernetes Ingress services                                     udp            30000 - 32767
MasterIngressWorkerIngressServicesUDP     Kubernetes Ingress services                                     udp            30000 - 32767

Worker Ingress

The worker machines require the following Ingress groups. Each Ingress group is an AWS::EC2::SecurityGroupIngress resource.

Ingress group                             Description                                                     IP protocol    Port range
WorkerIngressVxlan                        Vxlan packets                                                   udp            4789
WorkerIngressWorkerVxlan                  Vxlan packets                                                   udp            4789
WorkerIngressInternal                     Internal cluster communication                                  tcp            9000 - 9999
WorkerIngressWorkerInternal               Internal cluster communication                                  tcp            9000 - 9999
WorkerIngressKube                         Kubernetes kubelet, scheduler, and controller manager          tcp            10250
WorkerIngressWorkerKube                   Kubernetes kubelet, scheduler, and controller manager          tcp            10250
WorkerIngressIngressServices              Kubernetes Ingress services                                     tcp            30000 - 32767
WorkerIngressWorkerIngressServices        Kubernetes Ingress services                                     tcp            30000 - 32767
WorkerIngressGeneve                       Geneve packets                                                  udp            6081
WorkerIngressMasterGeneve                 Geneve packets                                                  udp            6081
WorkerIngressIpsecIke                     IPsec IKE packets                                               udp            500
WorkerIngressMasterIpsecIke               IPsec IKE packets                                               udp            500
WorkerIngressIpsecNat                     IPsec NAT-T packets                                             udp            4500
WorkerIngressMasterIpsecNat               IPsec NAT-T packets                                             udp            4500
WorkerIngressIpsecEsp                     IPsec ESP packets                                               50             All
WorkerIngressMasterIpsecEsp               IPsec ESP packets                                               50             All
WorkerIngressInternalUDP                  Internal cluster communication                                  udp            9000 - 9999
WorkerIngressMasterInternalUDP            Internal cluster communication                                  udp            9000 - 9999
WorkerIngressIngressServicesUDP           Kubernetes Ingress services                                     udp            30000 - 32767
WorkerIngressMasterIngressServicesUDP     Kubernetes Ingress services                                     udp            30000 - 32767

Roles and instance profiles

You must grant the machines permissions in AWS. The provided CloudFormation templates grant the machines Allow permissions for the following AWS::IAM::Role objects and provide an AWS::IAM::InstanceProfile for each set of roles. If you do not use the templates, you can grant the machines the following broad permissions or the following individual permissions.

Role         Effect    Action                      Resource
Master       Allow     ec2:*                       *
             Allow     elasticloadbalancing:*      *
             Allow     iam:PassRole                *
             Allow     s3:GetObject                *
Worker       Allow     ec2:Describe*               *
Bootstrap    Allow     ec2:Describe*               *
             Allow     ec2:AttachVolume            *
             Allow     ec2:DetachVolume            *
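
If you create these roles without the templates, the equivalent AWS CLI calls look similar to the following sketch for the control plane role; the role, profile, and policy names are placeholders, and the trust and permission policy documents are JSON files that you author to match the permissions in the table above:

    # Example only: replace the placeholder names and supply your own policy documents.
    $ aws iam create-role --role-name <infra_id>-master-role --assume-role-policy-document file://ec2-trust-policy.json
    $ aws iam put-role-policy --role-name <infra_id>-master-role --policy-name <infra_id>-master-policy --policy-document file://master-policy.json
    $ aws iam create-instance-profile --instance-profile-name <infra_id>-master-profile
    $ aws iam add-role-to-instance-profile --instance-profile-name <infra_id>-master-profile --role-name <infra_id>-master-role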

Cluster machines

You need AWS::EC2::Instance objects for the following machines:

  • A bootstrap machine. This machine is required during installation, but you can remove it after your cluster deploys.

  • Three control plane machines. The control plane machines are not governed by a machine set.

  • Compute machines. You must create at least two compute machines, which are also known as worker machines, during installation. These machines are not governed by a machine set.

Certificate signing requests management

Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them.
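
For example, pending CSRs can be reviewed and, after you verify them, approved with the oc client once the cluster is reachable; the CSR name is a placeholder:

    # Example only: approve a CSR only after you have verified the request.
    $ oc get csr
    $ oc adm certificate approve <csr_name>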

Supported AWS machine types

The following Amazon Web Services (AWS) instance types are supported with OpenShift Container Platform.

Instance types for machines
Instance type    Bootstrap    Control plane    Compute
i3.large         x
m4.large                                       x
m4.xlarge                     x                x
m4.2xlarge                    x                x
m4.4xlarge                    x                x
m4.10xlarge                   x                x
m4.16xlarge                   x                x
m5.large                                       x
m5.xlarge                     x                x
m5.2xlarge                    x                x
m5.4xlarge                    x                x
m5.8xlarge                    x                x
m5.12xlarge                   x                x
m5.16xlarge                   x                x
m5a.large                                      x
m5a.xlarge                    x                x
m5a.2xlarge                   x                x
m5a.4xlarge                   x                x
m5a.8xlarge                   x                x
m5a.12xlarge                  x                x
m5a.16xlarge                  x                x
m6i.xlarge                    x                x
m6i.2xlarge                   x                x
m6i.4xlarge                   x                x
m6i.8xlarge                   x                x
m6i.16xlarge                  x                x
c4.2xlarge                    x                x
c4.4xlarge                    x                x
c4.8xlarge                    x                x
c5.xlarge                                      x
c5.2xlarge                    x                x
c5.4xlarge                    x                x
c5.9xlarge                    x                x
c5.12xlarge                   x                x
c5.18xlarge                   x                x
c5.24xlarge                   x                x
c5a.xlarge                                     x
c5a.2xlarge                   x                x
c5a.4xlarge                   x                x
c5a.8xlarge                   x                x
c5a.12xlarge                  x                x
c5a.16xlarge                  x                x
c5a.24xlarge                  x                x
r4.large                                       x
r4.xlarge                     x                x
r4.2xlarge                    x                x
r4.4xlarge                    x                x
r4.8xlarge                    x                x
r4.16xlarge                   x                x
r5.large                                       x
r5.xlarge                     x                x
r5.2xlarge                    x                x
r5.4xlarge                    x                x
r5.8xlarge                    x                x
r5.12xlarge                   x                x
r5.16xlarge                   x                x
r5.24xlarge                   x                x
r5a.large                                      x
r5a.xlarge                    x                x
r5a.2xlarge                   x                x
r5a.4xlarge                   x                x
r5a.8xlarge                   x                x
r5a.12xlarge                  x                x
r5a.16xlarge                  x                x
r5a.24xlarge                  x                x
t3.large                                       x
t3.xlarge                                      x
t3.2xlarge                                     x
t3a.large                                      x
t3a.xlarge                                     x
t3a.2xlarge                                    x

Required AWS permissions for the IAM user

Your IAM user must have the permission tag:GetResources in the region us-east-1 to delete the base cluster resources. As part of the AWS API requirement, the OpenShift Container Platform installation program performs various actions in this region.

When you attach the AdministratorAccess policy to the IAM user that you create in Amazon Web Services (AWS), you grant that user all of the required permissions. To deploy all components of an OpenShift Container Platform cluster, the IAM user requires the following permissions:
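
For example, assuming a dedicated installer IAM user, you can attach the AdministratorAccess policy with the AWS CLI; the user name is a placeholder:

    # Example only: replace the placeholder user name with your own.
    $ aws iam attach-user-policy --user-name <iam_user> --policy-arn arn:aws:iam::aws:policy/AdministratorAccess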

Required EC2 permissions for installation
  • ec2:AuthorizeSecurityGroupEgress

  • ec2:AuthorizeSecurityGroupIngress

  • ec2:CopyImage

  • ec2:CreateNetworkInterface

  • ec2:AttachNetworkInterface

  • ec2:CreateSecurityGroup

  • ec2:CreateTags

  • ec2:CreateVolume

  • ec2:DeleteSecurityGroup

  • ec2:DeleteSnapshot

  • ec2:DeleteTags

  • ec2:DeregisterImage

  • ec2:DescribeAccountAttributes

  • ec2:DescribeAddresses

  • ec2:DescribeAvailabilityZones

  • ec2:DescribeDhcpOptions

  • ec2:DescribeImages

  • ec2:DescribeInstanceAttribute

  • ec2:DescribeInstanceCreditSpecifications

  • ec2:DescribeInstances

  • ec2:DescribeInstanceTypes

  • ec2:DescribeInternetGateways

  • ec2:DescribeKeyPairs

  • ec2:DescribeNatGateways

  • ec2:DescribeNetworkAcls

  • ec2:DescribeNetworkInterfaces

  • ec2:DescribePrefixLists

  • ec2:DescribeRegions

  • ec2:DescribeRouteTables

  • ec2:DescribeSecurityGroups

  • ec2:DescribeSubnets

  • ec2:DescribeTags

  • ec2:DescribeVolumes

  • ec2:DescribeVpcAttribute

  • ec2:DescribeVpcClassicLink

  • ec2:DescribeVpcClassicLinkDnsSupport

  • ec2:DescribeVpcEndpoints

  • ec2:DescribeVpcs

  • ec2:GetEbsDefaultKmsKeyId

  • ec2:ModifyInstanceAttribute

  • ec2:ModifyNetworkInterfaceAttribute

  • ec2:RevokeSecurityGroupEgress

  • ec2:RevokeSecurityGroupIngress

  • ec2:RunInstances

  • ec2:TerminateInstances

Required permissions for creating network resources during installation
  • ec2:AllocateAddress

  • ec2:AssociateAddress

  • ec2:AssociateDhcpOptions

  • ec2:AssociateRouteTable

  • ec2:AttachInternetGateway

  • ec2:CreateDhcpOptions

  • ec2:CreateInternetGateway

  • ec2:CreateNatGateway

  • ec2:CreateRoute

  • ec2:CreateRouteTable

  • ec2:CreateSubnet

  • ec2:CreateVpc

  • ec2:CreateVpcEndpoint

  • ec2:ModifySubnetAttribute

  • ec2:ModifyVpcAttribute

If you use an existing VPC, your account does not require these permissions for creating network resources.

Required Elastic Load Balancing permissions (ELB) for installation
  • elasticloadbalancing:AddTags

  • elasticloadbalancing:ApplySecurityGroupsToLoadBalancer

  • elasticloadbalancing:AttachLoadBalancerToSubnets

  • elasticloadbalancing:ConfigureHealthCheck

  • elasticloadbalancing:CreateLoadBalancer

  • elasticloadbalancing:CreateLoadBalancerListeners

  • elasticloadbalancing:DeleteLoadBalancer

  • elasticloadbalancing:DeregisterInstancesFromLoadBalancer

  • elasticloadbalancing:DescribeInstanceHealth

  • elasticloadbalancing:DescribeLoadBalancerAttributes

  • elasticloadbalancing:DescribeLoadBalancers

  • elasticloadbalancing:DescribeTags

  • elasticloadbalancing:ModifyLoadBalancerAttributes

  • elasticloadbalancing:RegisterInstancesWithLoadBalancer

  • elasticloadbalancing:SetLoadBalancerPoliciesOfListener

Required Elastic Load Balancing permissions (ELBv2) for installation
  • elasticloadbalancing:AddTags

  • elasticloadbalancing:CreateListener

  • elasticloadbalancing:CreateLoadBalancer

  • elasticloadbalancing:CreateTargetGroup

  • elasticloadbalancing:DeleteLoadBalancer

  • elasticloadbalancing:DeregisterTargets

  • elasticloadbalancing:DescribeListeners

  • elasticloadbalancing:DescribeLoadBalancerAttributes

  • elasticloadbalancing:DescribeLoadBalancers

  • elasticloadbalancing:DescribeTargetGroupAttributes

  • elasticloadbalancing:DescribeTargetHealth

  • elasticloadbalancing:ModifyLoadBalancerAttributes

  • elasticloadbalancing:ModifyTargetGroup

  • elasticloadbalancing:ModifyTargetGroupAttributes

  • elasticloadbalancing:RegisterTargets

Required IAM permissions for installation
  • iam:AddRoleToInstanceProfile

  • iam:CreateInstanceProfile

  • iam:CreateRole

  • iam:DeleteInstanceProfile

  • iam:DeleteRole

  • iam:DeleteRolePolicy

  • iam:GetInstanceProfile

  • iam:GetRole

  • iam:GetRolePolicy

  • iam:GetUser

  • iam:ListInstanceProfilesForRole

  • iam:ListRoles

  • iam:ListUsers

  • iam:PassRole

  • iam:PutRolePolicy

  • iam:RemoveRoleFromInstanceProfile

  • iam:SimulatePrincipalPolicy

  • iam:TagRole

If you have not created an elastic load balancer (ELB) in your AWS account, the IAM user also requires the iam:CreateServiceLinkedRole permission.
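
You can spot-check whether a user holds a particular permission by using the iam:SimulatePrincipalPolicy action that is listed above. The following example is a sketch only; the account ID, user name, and action names are placeholders:

    # Example only: replace the placeholder ARN and action names with your own.
    $ aws iam simulate-principal-policy --policy-source-arn arn:aws:iam::<account_id>:user/<iam_user> --action-names ec2:RunInstances iam:PassRole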

Required Route 53 permissions for installation
  • route53:ChangeResourceRecordSets

  • route53:ChangeTagsForResource

  • route53:CreateHostedZone

  • route53:DeleteHostedZone

  • route53:GetChange

  • route53:GetHostedZone

  • route53:ListHostedZones

  • route53:ListHostedZonesByName

  • route53:ListResourceRecordSets

  • route53:ListTagsForResource

  • route53:UpdateHostedZoneComment

Required S3 permissions for installation
  • s3:CreateBucket

  • s3:DeleteBucket

  • s3:GetAccelerateConfiguration

  • s3:GetBucketAcl

  • s3:GetBucketCors

  • s3:GetBucketLocation

  • s3:GetBucketLogging

  • s3:GetBucketObjectLockConfiguration

  • s3:GetBucketReplication

  • s3:GetBucketRequestPayment

  • s3:GetBucketTagging

  • s3:GetBucketVersioning

  • s3:GetBucketWebsite

  • s3:GetEncryptionConfiguration

  • s3:GetLifecycleConfiguration

  • s3:GetReplicationConfiguration

  • s3:ListBucket

  • s3:PutBucketAcl

  • s3:PutBucketTagging

  • s3:PutEncryptionConfiguration

S3 permissions that cluster Operators require
  • s3:DeleteObject

  • s3:GetObject

  • s3:GetObjectAcl

  • s3:GetObjectTagging

  • s3:GetObjectVersion

  • s3:PutObject

  • s3:PutObjectAcl

  • s3:PutObjectTagging

Required permissions to delete base cluster resources
  • autoscaling:DescribeAutoScalingGroups

  • ec2:DeleteNetworkInterface

  • ec2:DeleteVolume

  • elasticloadbalancing:DeleteTargetGroup

  • elasticloadbalancing:DescribeTargetGroups

  • iam:DeleteAccessKey

  • iam:DeleteUser

  • iam:ListAttachedRolePolicies

  • iam:ListInstanceProfiles

  • iam:ListRolePolicies

  • iam:ListUserPolicies

  • s3:DeleteObject

  • s3:ListBucketVersions

  • tag:GetResources

Required permissions to delete network resources
  • ec2:DeleteDhcpOptions

  • ec2:DeleteInternetGateway

  • ec2:DeleteNatGateway

  • ec2:DeleteRoute

  • ec2:DeleteRouteTable

  • ec2:DeleteSubnet

  • ec2:DeleteVpc

  • ec2:DeleteVpcEndpoints

  • ec2:DetachInternetGateway

  • ec2:DisassociateRouteTable

  • ec2:ReleaseAddress

  • ec2:ReplaceRouteTableAssociation

If you use an existing VPC, your account does not require these permissions to delete network resources. Instead, your account only requires the tag:UntagResources permission to delete network resources.

Required permissions to delete a cluster with shared instance roles
  • iam:UntagRole

Additional IAM and S3 permissions that are required to create manifests
  • iam:DeleteAccessKey

  • iam:DeleteUser

  • iam:DeleteUserPolicy

  • iam:GetUserPolicy

  • iam:ListAccessKeys

  • iam:PutUserPolicy

  • iam:TagUser

  • s3:PutBucketPublicAccessBlock

  • s3:GetBucketPublicAccessBlock

  • s3:PutLifecycleConfiguration

  • s3:HeadBucket

  • s3:ListBucketMultipartUploads

  • s3:AbortMultipartUpload

If you are managing your cloud provider credentials with mint mode, the IAM user also requires the iam:CreateAccessKey and iam:CreateUser permissions.

Optional permissions for instance and quota checks for installation
  • ec2:DescribeInstanceTypeOfferings

  • servicequotas:ListAWSDefaultServiceQuotas

Obtaining an AWS Marketplace image

If you are deploying an OpenShift Container Platform cluster using an AWS Marketplace image, you must first subscribe through AWS. Subscribing to the offer provides you with the AMI ID that the installation program uses to deploy worker nodes.

Deploying an OpenShift Container Platform cluster using an AWS Marketplace image is not supported in secret regions.

Prerequisites
  • You have an AWS account to purchase the offer. This account does not have to be the same account that is used to install the cluster.

Procedure
  1. Complete the OpenShift Container Platform subscription from the AWS Marketplace.

  2. Record the AMI ID for your specific region. If you use the CloudFormation template to deploy your worker nodes, you must update the worker0.type.properties.ImageID parameter with this value.
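
    One possible way to look up the AMI ID after you subscribe is to query EC2 for AWS Marketplace-owned images. The name filter in this example is a placeholder that you replace with the product name from your subscription:

    # Example only: replace the placeholder image name with the product name from your subscription.
    $ aws ec2 describe-images --owners aws-marketplace --filters "Name=name,Values=<marketplace_image_name>*" --query 'Images[*].[ImageId,Name]' --output table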

Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites
  • You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure
  1. Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.

  2. Select your infrastructure provider.

  3. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.

    The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

    Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

  4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

    $ tar xvf openshift-install-linux.tar.gz
  5. Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.

Generating a key pair for cluster node SSH access

During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.

After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
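
For example, after your nodes are running and the private key is available to SSH on your local machine, you could open a session as follows; the node address is a placeholder:

    # Example only: replace the placeholder with a node IP address or hostname.
    $ ssh core@<node_ip_or_hostname>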

If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.

Do not skip this procedure in production environments, where disaster recovery and debugging is required.

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure
  1. If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:

    $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> (1)
    1 Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory.

    If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

  2. View the public SSH key:

    $ cat <path>/<file_name>.pub

    For example, run the following to view the ~/.ssh/id_ed25519.pub public key:

    $ cat ~/.ssh/id_ed25519.pub
  3. Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.

    On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.

    1. If the ssh-agent process is not already running for your local user, start it as a background task:

      $ eval "$(ssh-agent -s)"
      Example output
      Agent pid 31874

      If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.

  4. Add your SSH private key to the ssh-agent:

    $ ssh-add <path>/<file_name> (1)
    1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519
    Example output
    Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
  • When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide the key to the installation program.

Creating the installation files for AWS

To install OpenShift Container Platform on Amazon Web Services (AWS) using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate var partition during the preparation phases of installation.

Optional: Creating a separate /var partition

It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow.

OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var. For example:

  • /var/lib/containers: Holds container-related content that can grow as more images and containers are added to a system.

  • /var/lib/etcd: Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage.

  • /var: Holds data that you might want to keep separate for purposes such as auditing.

Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems.

Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation.

If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section.

Procedure
  1. Create a directory to hold the OpenShift Container Platform installation files:

    $ mkdir $HOME/clusterconfig
  2. Run openshift-install to create a set of files in the manifests and openshift subdirectories. Answer the system questions as you are prompted:

    $ openshift-install create manifests --dir $HOME/clusterconfig
    Example output
    ? SSH Public Key ...
    INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials"
    INFO Consuming Install Config from target directory
    INFO Manifests created in: $HOME/clusterconfig/manifests and $HOME/clusterconfig/openshift
  3. Optional: Confirm that the installation program created manifests in the clusterconfig/openshift directory:

    $ ls $HOME/clusterconfig/openshift/
    Example output
    99_kubeadmin-password-secret.yaml
    99_openshift-cluster-api_master-machines-0.yaml
    99_openshift-cluster-api_master-machines-1.yaml
    99_openshift-cluster-api_master-machines-2.yaml
    ...
  4. Create a Butane config that configures the additional partition. For example, name the file $HOME/clusterconfig/98-var-partition.bu, change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition:

    variant: openshift
    version: 4.9.0
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker
      name: 98-var-partition
    storage:
      disks:
      - device: /dev/<device_name> (1)
        partitions:
        - label: var
          start_mib: <partition_start_offset> (2)
          size_mib: <partition_size> (3)
      filesystems:
        - device: /dev/disk/by-partlabel/var
          path: /var
          format: xfs
          mount_options: [defaults, prjquota] (4)
          with_mount_unit: true
    1 The storage device name of the disk that you want to partition.
    2 When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition.
    3 The size of the data partition in mebibytes.
    4 The prjquota mount option must be enabled for filesystems used for container storage.

    When creating a separate /var partition, you cannot use different instance types for worker nodes if those instance types do not have the same device name.

  5. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command:

    $ butane $HOME/clusterconfig/98-var-partition.bu -o $HOME/clusterconfig/openshift/98-var-partition.yaml
  6. Run openshift-install again to create Ignition configs from the set of files in the manifests and openshift subdirectories:

    $ openshift-install create ignition-configs --dir $HOME/clusterconfig
    $ ls $HOME/clusterconfig/
    auth  bootstrap.ign  master.ign  metadata.json  worker.ign

Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems.

Creating the installation configuration file

Generate and customize the installation configuration file that the installation program needs to deploy your cluster.

Prerequisites
  • You obtained the OpenShift Container Platform installation program for user-provisioned infrastructure and the pull secret for your cluster.

  • You checked that you are deploying your cluster to a region with an accompanying Red Hat Enterprise Linux CoreOS (RHCOS) AMI published by Red Hat. If you are deploying to a region that requires a custom AMI, such as an AWS GovCloud region, you must create the install-config.yaml file manually.

Procedure
  1. Create the install-config.yaml file.

    1. Change to the directory that contains the installation program and run the following command:

      $ ./openshift-install create install-config --dir <installation_directory> (1)
      1 For <installation_directory>, specify the directory name to store the files that the installation program creates.

      Specify an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

    2. At the prompts, provide the configuration details for your cloud:

      1. Optional: Select an SSH key to use to access your cluster machines.

        For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

      2. Select aws as the platform to target.

      3. If you do not have an AWS profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.

        The AWS access key ID and secret access key are stored in ~/.aws/credentials in the home directory of the current user on the installation host. You are prompted for the credentials by the installation program if the credentials for the exported profile are not present in the file. Any credentials that you provide to the installation program are stored in the file.

      4. Select the AWS region to deploy the cluster to.

      5. Select the base domain for the Route 53 service that you configured for your cluster.

      6. Enter a descriptive name for your cluster.

      7. Paste the pull secret from the Red Hat OpenShift Cluster Manager.

  2. Optional: Back up the install-config.yaml file.

    The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.

Additional resources

Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Prerequisites
  • You have an existing install-config.yaml file.

  • You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object’s spec.noProxy field to bypass the proxy if necessary.

    The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

    For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).

  • If your cluster is on AWS, you added the ec2.<region>.amazonaws.com, elasticloadbalancing.<region>.amazonaws.com, and s3.<region>.amazonaws.com endpoints to your VPC endpoint. These endpoints are required to complete requests from the nodes to the AWS EC2 API. Because the proxy works on the container level, not the node level, you must route these requests to the AWS EC2 API through the AWS private network. Adding the public IP address of the EC2 API to your allowlist in your proxy server is not sufficient.

Procedure
  1. Edit your install-config.yaml file and add the proxy settings. For example:

    apiVersion: v1
    baseDomain: my.domain.com
    proxy:
      httpProxy: http://<username>:<pswd>@<ip>:<port> (1)
      httpsProxy: https://<username>:<pswd>@<ip>:<port> (2)
      noProxy: example.com (3)
    additionalTrustBundle: | (4)
        -----BEGIN CERTIFICATE-----
        <MY_TRUSTED_CA_CERT>
        -----END CERTIFICATE-----
    ...
    1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
    2 A proxy URL to use for creating HTTPS connections outside the cluster.
    3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
    4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy’s identity certificate is signed by an authority from the RHCOS trust bundle.

    The installation program does not support the proxy readinessEndpoints field.

  2. Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.

Only the Proxy object named cluster is supported, and no additional proxies can be created.

Creating the Kubernetes manifest and Ignition config files

Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines.

The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines.

  • The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

  • It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.

Prerequisites
  • You obtained the OpenShift Container Platform installation program.

  • You created the install-config.yaml installation configuration file.

Procedure
  1. Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster:

    $ ./openshift-install create manifests --dir <installation_directory> (1)
    1 For <installation_directory>, specify the installation directory that contains the install-config.yaml file you created.
  2. Remove the Kubernetes manifest files that define the control plane machines:

    $ rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml

    By removing these files, you prevent the cluster from automatically generating control plane machines.

  3. Remove the Kubernetes manifest files that define the worker machines:

    $ rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml

    Because you create and manage the worker machines yourself, you do not need to initialize these machines.

  4. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false. This setting prevents pods from being scheduled on the control plane machines:

    1. Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file.

    2. Locate the mastersSchedulable parameter and ensure that it is set to false.

    3. Save and exit the file.

  5. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file:

    apiVersion: config.openshift.io/v1
    kind: DNS
    metadata:
      creationTimestamp: null
      name: cluster
    spec:
      baseDomain: example.openshift.com
      privateZone: (1)
        id: mycluster-100419-private-zone
      publicZone: (1)
        id: example.openshift.com
    status: {}
    1 Remove this section completely.

    If you do so, you must add ingress DNS records manually in a later step.

  6. To create the Ignition configuration files, run the following command from the directory that contains the installation program:

    $ ./openshift-install create ignition-configs --dir <installation_directory> (1)
    1 For <installation_directory>, specify the same installation directory.

    Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory:

    .
    ├── auth
    │   ├── kubeadmin-password
    │   └── kubeconfig
    ├── bootstrap.ign
    ├── master.ign
    ├── metadata.json
    └── worker.ign

Extracting the infrastructure name

The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in Amazon Web Services (AWS). The infrastructure name is also used to locate the appropriate AWS resources during an OpenShift Container Platform installation. The provided CloudFormation templates contain references to this infrastructure name, so you must extract it.

Prerequisites
  • You obtained the OpenShift Container Platform installation program and the pull secret for your cluster.

  • You generated the Ignition config files for your cluster.

  • You installed the jq package.

Procedure
  • To extract and view the infrastructure name from the Ignition config file metadata, run the following command:

    $ jq -r .infraID <installation_directory>/metadata.json (1)
    1 For <installation_directory>, specify the path to the directory that you stored the installation files in.
    Example output
    openshift-vw9j6 (1)
    
    1 The output of this command is your cluster name and a random string.
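
    If you want to reuse the value in later commands, you can capture it in an environment variable, for example:

    # Example only: substitute your own installation directory path.
    $ export INFRA_ID=$(jq -r .infraID <installation_directory>/metadata.json)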

Creating a VPC in AWS

You must create a Virtual Private Cloud (VPC) in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. You can customize the VPC to meet your requirements, including VPN and route tables.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the VPC.

If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites
  • You configured an AWS account.

  • You added your AWS keys and region to your local AWS profile by running aws configure.

  • You generated the Ignition config files for your cluster.

Procedure
  1. Create a JSON file that contains the parameter values that the template requires:

    [
      {
        "ParameterKey": "VpcCidr", (1)
        "ParameterValue": "10.0.0.0/16" (2)
      },
      {
        "ParameterKey": "AvailabilityZoneCount", (3)
        "ParameterValue": "1" (4)
      },
      {
        "ParameterKey": "SubnetBits", (5)
        "ParameterValue": "12" (6)
      }
    ]
    1 The CIDR block for the VPC.
    2 Specify a CIDR block in the format x.x.x.x/16-24.
    3 The number of availability zones to deploy the VPC in.
    4 Specify an integer between 1 and 3.
    5 The size of each subnet in each availability zone.
    6 Specify an integer between 5 and 13, where 5 is /27 and 13 is /19.
  2. Copy the template from the CloudFormation template for the VPC section of this topic and save it as a YAML file on your computer. This template describes the VPC that your cluster requires.

  3. Launch the CloudFormation template to create a stack of AWS resources that represent the VPC:

    You must enter the command on a single line.

    $ aws cloudformation create-stack --stack-name <name> (1)
         --template-body file://<template>.yaml (2)
         --parameters file://<parameters>.json (3)
    1 <name> is the name for the CloudFormation stack, such as cluster-vpc. You need the name of this stack if you remove the cluster.
    2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved.
    3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file.
    Example output
    arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f
  4. Confirm that the template components exist:

    $ aws cloudformation describe-stacks --stack-name <name>

    After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster:

    VpcId

    The ID of your VPC.

    PublicSubnetIds

    The IDs of the new public subnets.

    PrivateSubnetIds

    The IDs of the new private subnets.
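
    If you prefer to read these output values directly on the command line, you can query the stack outputs, for example:

    # Example only: replace <name> with your CloudFormation stack name.
    $ aws cloudformation describe-stacks --stack-name <name> --query 'Stacks[0].Outputs' --output table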

CloudFormation template for the VPC

You can use the following CloudFormation template to deploy the VPC that you need for your OpenShift Container Platform cluster.

CloudFormation template for the VPC
AWSTemplateFormatVersion: 2010-09-09
Description: Template for Best Practice VPC with 1-3 AZs

Parameters:
  VpcCidr:
    AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))$
    ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24.
    Default: 10.0.0.0/16
    Description: CIDR block for VPC.
    Type: String
  AvailabilityZoneCount:
    ConstraintDescription: "The number of availability zones. (Min: 1, Max: 3)"
    MinValue: 1
    MaxValue: 3
    Default: 1
    Description: "How many AZs to create VPC subnets for. (Min: 1, Max: 3)"
    Type: Number
  SubnetBits:
    ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27.
    MinValue: 5
    MaxValue: 13
    Default: 12
    Description: "Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)"
    Type: Number

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcCidr
      - SubnetBits
    - Label:
        default: "Availability Zones"
      Parameters:
      - AvailabilityZoneCount
    ParameterLabels:
      AvailabilityZoneCount:
        default: "Availability Zone Count"
      VpcCidr:
        default: "VPC CIDR"
      SubnetBits:
        default: "Bits Per Subnet"

Conditions:
  DoAz3: !Equals [3, !Ref AvailabilityZoneCount]
  DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3]

Resources:
  VPC:
    Type: "AWS::EC2::VPC"
    Properties:
      EnableDnsSupport: "true"
      EnableDnsHostnames: "true"
      CidrBlock: !Ref VpcCidr
  PublicSubnet:
    Type: "AWS::EC2::Subnet"
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 0
      - Fn::GetAZs: !Ref "AWS::Region"
  PublicSubnet2:
    Type: "AWS::EC2::Subnet"
    Condition: DoAz2
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 1
      - Fn::GetAZs: !Ref "AWS::Region"
  PublicSubnet3:
    Type: "AWS::EC2::Subnet"
    Condition: DoAz3
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 2
      - Fn::GetAZs: !Ref "AWS::Region"
  InternetGateway:
    Type: "AWS::EC2::InternetGateway"
  GatewayToInternet:
    Type: "AWS::EC2::VPCGatewayAttachment"
    Properties:
      VpcId: !Ref VPC
      InternetGatewayId: !Ref InternetGateway
  PublicRouteTable:
    Type: "AWS::EC2::RouteTable"
    Properties:
      VpcId: !Ref VPC
  PublicRoute:
    Type: "AWS::EC2::Route"
    DependsOn: GatewayToInternet
    Properties:
      RouteTableId: !Ref PublicRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref InternetGateway
  PublicSubnetRouteTableAssociation:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Properties:
      SubnetId: !Ref PublicSubnet
      RouteTableId: !Ref PublicRouteTable
  PublicSubnetRouteTableAssociation2:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Condition: DoAz2
    Properties:
      SubnetId: !Ref PublicSubnet2
      RouteTableId: !Ref PublicRouteTable
  PublicSubnetRouteTableAssociation3:
    Condition: DoAz3
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Properties:
      SubnetId: !Ref PublicSubnet3
      RouteTableId: !Ref PublicRouteTable
  PrivateSubnet:
    Type: "AWS::EC2::Subnet"
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 0
      - Fn::GetAZs: !Ref "AWS::Region"
  PrivateRouteTable:
    Type: "AWS::EC2::RouteTable"
    Properties:
      VpcId: !Ref VPC
  PrivateSubnetRouteTableAssociation:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Properties:
      SubnetId: !Ref PrivateSubnet
      RouteTableId: !Ref PrivateRouteTable
  NAT:
    DependsOn:
    - GatewayToInternet
    Type: "AWS::EC2::NatGateway"
    Properties:
      AllocationId:
        "Fn::GetAtt":
        - EIP
        - AllocationId
      SubnetId: !Ref PublicSubnet
  EIP:
    Type: "AWS::EC2::EIP"
    Properties:
      Domain: vpc
  Route:
    Type: "AWS::EC2::Route"
    Properties:
      RouteTableId:
        Ref: PrivateRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId:
        Ref: NAT
  PrivateSubnet2:
    Type: "AWS::EC2::Subnet"
    Condition: DoAz2
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 1
      - Fn::GetAZs: !Ref "AWS::Region"
  PrivateRouteTable2:
    Type: "AWS::EC2::RouteTable"
    Condition: DoAz2
    Properties:
      VpcId: !Ref VPC
  PrivateSubnetRouteTableAssociation2:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Condition: DoAz2
    Properties:
      SubnetId: !Ref PrivateSubnet2
      RouteTableId: !Ref PrivateRouteTable2
  NAT2:
    DependsOn:
    - GatewayToInternet
    Type: "AWS::EC2::NatGateway"
    Condition: DoAz2
    Properties:
      AllocationId:
        "Fn::GetAtt":
        - EIP2
        - AllocationId
      SubnetId: !Ref PublicSubnet2
  EIP2:
    Type: "AWS::EC2::EIP"
    Condition: DoAz2
    Properties:
      Domain: vpc
  Route2:
    Type: "AWS::EC2::Route"
    Condition: DoAz2
    Properties:
      RouteTableId:
        Ref: PrivateRouteTable2
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId:
        Ref: NAT2
  PrivateSubnet3:
    Type: "AWS::EC2::Subnet"
    Condition: DoAz3
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 2
      - Fn::GetAZs: !Ref "AWS::Region"
  PrivateRouteTable3:
    Type: "AWS::EC2::RouteTable"
    Condition: DoAz3
    Properties:
      VpcId: !Ref VPC
  PrivateSubnetRouteTableAssociation3:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Condition: DoAz3
    Properties:
      SubnetId: !Ref PrivateSubnet3
      RouteTableId: !Ref PrivateRouteTable3
  NAT3:
    DependsOn:
    - GatewayToInternet
    Type: "AWS::EC2::NatGateway"
    Condition: DoAz3
    Properties:
      AllocationId:
        "Fn::GetAtt":
        - EIP3
        - AllocationId
      SubnetId: !Ref PublicSubnet3
  EIP3:
    Type: "AWS::EC2::EIP"
    Condition: DoAz3
    Properties:
      Domain: vpc
  Route3:
    Type: "AWS::EC2::Route"
    Condition: DoAz3
    Properties:
      RouteTableId:
        Ref: PrivateRouteTable3
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId:
        Ref: NAT3
  S3Endpoint:
    Type: AWS::EC2::VPCEndpoint
    Properties:
      PolicyDocument:
        Version: 2012-10-17
        Statement:
        - Effect: Allow
          Principal: '*'
          Action:
          - '*'
          Resource:
          - '*'
      RouteTableIds:
      - !Ref PublicRouteTable
      - !Ref PrivateRouteTable
      - !If [DoAz2, !Ref PrivateRouteTable2, !Ref "AWS::NoValue"]
      - !If [DoAz3, !Ref PrivateRouteTable3, !Ref "AWS::NoValue"]
      ServiceName: !Join
      - ''
      - - com.amazonaws.
        - !Ref 'AWS::Region'
        - .s3
      VpcId: !Ref VPC

Outputs:
  VpcId:
    Description: ID of the new VPC.
    Value: !Ref VPC
  PublicSubnetIds:
    Description: Subnet IDs of the public subnets.
    Value:
      !Join [
        ",",
        [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PublicSubnet3, !Ref "AWS::NoValue"]]
      ]
  PrivateSubnetIds:
    Description: Subnet IDs of the private subnets.
    Value:
      !Join [
        ",",
        [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PrivateSubnet3, !Ref "AWS::NoValue"]]
      ]

Creating networking and load balancing components in AWS

You must configure networking and classic or network load balancing in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the networking and load balancing components that your OpenShift Container Platform cluster requires. The template also creates a hosted zone and subnet tags.

You can run the template multiple times within a single Virtual Private Cloud (VPC).

If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites
  • You configured an AWS account.

  • You added your AWS keys and region to your local AWS profile by running aws configure.

  • You generated the Ignition config files for your cluster.

  • You created and configured a VPC and associated subnets in AWS.

Procedure
  1. Obtain the hosted zone ID for the Route 53 base domain that you specified in the install-config.yaml file for your cluster. You can obtain details about your hosted zone by running the following command:

    $ aws route53 list-hosted-zones-by-name --dns-name <route53_domain> (1)
    1 For the <route53_domain>, specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster.
    Example output
    mycluster.example.com.	False	100
    HOSTEDZONES	65F8F38E-2268-B835-E15C-AB55336FCBFA	/hostedzone/Z21IXYZABCZ2A4	mycluster.example.com.	10

    In the example output, the hosted zone ID is Z21IXYZABCZ2A4.
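    Alternatively, you can filter the hosted zone ID directly with the --query option instead of reading it from the text output. This is an optional convenience; it assumes that the first zone returned for your base domain is the one you want, and the returned value includes a /hostedzone/ prefix that you must trim:

    $ aws route53 list-hosted-zones-by-name --dns-name <route53_domain> \
         --query 'HostedZones[0].Id' --output text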

  2. Create a JSON file that contains the parameter values that the template requires:

    [
      {
        "ParameterKey": "ClusterName", (1)
        "ParameterValue": "mycluster" (2)
      },
      {
        "ParameterKey": "InfrastructureName", (3)
        "ParameterValue": "mycluster-<random_string>" (4)
      },
      {
        "ParameterKey": "HostedZoneId", (5)
        "ParameterValue": "<random_string>" (6)
      },
      {
        "ParameterKey": "HostedZoneName", (7)
        "ParameterValue": "example.com" (8)
      },
      {
        "ParameterKey": "PublicSubnets", (9)
        "ParameterValue": "subnet-<random_string>" (10)
      },
      {
        "ParameterKey": "PrivateSubnets", (11)
        "ParameterValue": "subnet-<random_string>" (12)
      },
      {
        "ParameterKey": "VpcId", (13)
        "ParameterValue": "vpc-<random_string>" (14)
      }
    ]
    1 A short, representative cluster name to use for hostnames, etc.
    2 Specify the cluster name that you used when you generated the install-config.yaml file for the cluster.
    3 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.
    4 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.
    5 The Route 53 public zone ID to register the targets with.
    6 Specify the Route 53 public zone ID, which has a format similar to Z21IXYZABCZ2A4. You can obtain this value from the AWS console.
    7 The Route 53 zone to register the targets with.
    8 Specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console.
    9 The public subnets that you created for your VPC.
    10 Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC.
    11 The private subnets that you created for your VPC.
    12 Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC.
    13 The VPC that you created for the cluster.
    14 Specify the VpcId value from the output of the CloudFormation template for the VPC.
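    Rather than copying values such as PublicSubnetIds, PrivateSubnetIds, and VpcId by hand, you can read them from the outputs of the VPC stack. For example, assuming you named that stack cluster-vpc, the following command prints a single output value:

    $ aws cloudformation describe-stacks --stack-name cluster-vpc \
         --query 'Stacks[0].Outputs[?OutputKey==`PrivateSubnetIds`].OutputValue' \
         --output text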
  3. Copy the template from the CloudFormation template for the network and load balancers section of this topic and save it as a YAML file on your computer. This template describes the networking and load balancing objects that your cluster requires.

    If you are deploying your cluster to an AWS government or secret region, you must update the InternalApiServerRecord in the CloudFormation template to use CNAME records. Records of type ALIAS are not supported for AWS government regions.

  4. Launch the CloudFormation template to create a stack of AWS resources that provide the networking and load balancing components:

    You must enter the command on a single line.

    $ aws cloudformation create-stack --stack-name <name> (1)
         --template-body file://<template>.yaml (2)
         --parameters file://<parameters>.json (3)
         --capabilities CAPABILITY_NAMED_IAM (4)
    
    1 <name> is the name for the CloudFormation stack, such as cluster-dns. You need the name of this stack if you remove the cluster.
    2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved.
    3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file.
    4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role resources.
    Example output
    arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183
  5. Confirm that the template components exist:

    $ aws cloudformation describe-stacks --stack-name <name>

    After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster:

    PrivateHostedZoneId

    Hosted zone ID for the private DNS.

    ExternalApiLoadBalancerName

    Full name of the external API load balancer.

    InternalApiLoadBalancerName

    Full name of the internal API load balancer.

    ApiServerDnsName

    Full hostname of the API server.

    RegisterNlbIpTargetsLambda

    Lambda ARN useful to help register/deregister IP targets for these load balancers.

    ExternalApiTargetGroupArn

    ARN of external API target group.

    InternalApiTargetGroupArn

    ARN of internal API target group.

    InternalServiceTargetGroupArn

    ARN of internal service target group.
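    If you prefer not to rerun describe-stacks manually, you can instead block until stack creation finishes. The wait command returns when the stack reaches CREATE_COMPLETE, or exits with an error if creation fails, and it applies equally to the other stacks that you create in this document:

    $ aws cloudformation wait stack-create-complete --stack-name <name>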

CloudFormation template for the network and load balancers

You can use the following CloudFormation template to deploy the networking objects and load balancers that you need for your OpenShift Container Platform cluster.

CloudFormation template for the network and load balancers
AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Network Elements (Route53 & LBs)

Parameters:
  ClusterName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, representative cluster name to use for host names and other identifying names.
    Type: String
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster.
    Type: String
  HostedZoneId:
    Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4.
    Type: String
  HostedZoneName:
    Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period.
    Type: String
    Default: "example.com"
  PublicSubnets:
    Description: The internet-facing subnets.
    Type: List<AWS::EC2::Subnet::Id>
  PrivateSubnets:
    Description: The internal subnets.
    Type: List<AWS::EC2::Subnet::Id>
  VpcId:
    Description: The VPC-scoped resources will belong to this VPC.
    Type: AWS::EC2::VPC::Id

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - ClusterName
      - InfrastructureName
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcId
      - PublicSubnets
      - PrivateSubnets
    - Label:
        default: "DNS"
      Parameters:
      - HostedZoneName
      - HostedZoneId
    ParameterLabels:
      ClusterName:
        default: "Cluster Name"
      InfrastructureName:
        default: "Infrastructure Name"
      VpcId:
        default: "VPC ID"
      PublicSubnets:
        default: "Public Subnets"
      PrivateSubnets:
        default: "Private Subnets"
      HostedZoneName:
        default: "Public Hosted Zone Name"
      HostedZoneId:
        default: "Public Hosted Zone ID"

Resources:
  ExtApiElb:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Name: !Join ["-", [!Ref InfrastructureName, "ext"]]
      IpAddressType: ipv4
      Subnets: !Ref PublicSubnets
      Type: network

  IntApiElb:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Name: !Join ["-", [!Ref InfrastructureName, "int"]]
      Scheme: internal
      IpAddressType: ipv4
      Subnets: !Ref PrivateSubnets
      Type: network

  IntDns:
    Type: "AWS::Route53::HostedZone"
    Properties:
      HostedZoneConfig:
        Comment: "Managed by CloudFormation"
      Name: !Join [".", [!Ref ClusterName, !Ref HostedZoneName]]
      HostedZoneTags:
      - Key: Name
        Value: !Join ["-", [!Ref InfrastructureName, "int"]]
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "owned"
      VPCs:
      - VPCId: !Ref VpcId
        VPCRegion: !Ref "AWS::Region"

  ExternalApiServerRecord:
    Type: AWS::Route53::RecordSetGroup
    Properties:
      Comment: Alias record for the API server
      HostedZoneId: !Ref HostedZoneId
      RecordSets:
      - Name:
          !Join [
            ".",
            ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]],
          ]
        Type: A
        AliasTarget:
          HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID
          DNSName: !GetAtt ExtApiElb.DNSName

  InternalApiServerRecord:
    Type: AWS::Route53::RecordSetGroup
    Properties:
      Comment: Alias record for the API server
      HostedZoneId: !Ref IntDns
      RecordSets:
      - Name:
          !Join [
            ".",
            ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]],
          ]
        Type: A
        AliasTarget:
          HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID
          DNSName: !GetAtt IntApiElb.DNSName
      - Name:
          !Join [
            ".",
            ["api-int", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]],
          ]
        Type: A
        AliasTarget:
          HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID
          DNSName: !GetAtt IntApiElb.DNSName

  ExternalApiListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      DefaultActions:
      - Type: forward
        TargetGroupArn:
          Ref: ExternalApiTargetGroup
      LoadBalancerArn:
        Ref: ExtApiElb
      Port: 6443
      Protocol: TCP

  ExternalApiTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      HealthCheckIntervalSeconds: 10
      HealthCheckPath: "/readyz"
      HealthCheckPort: 6443
      HealthCheckProtocol: HTTPS
      HealthyThresholdCount: 2
      UnhealthyThresholdCount: 2
      Port: 6443
      Protocol: TCP
      TargetType: ip
      VpcId:
        Ref: VpcId
      TargetGroupAttributes:
      - Key: deregistration_delay.timeout_seconds
        Value: 60

  InternalApiListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      DefaultActions:
      - Type: forward
        TargetGroupArn:
          Ref: InternalApiTargetGroup
      LoadBalancerArn:
        Ref: IntApiElb
      Port: 6443
      Protocol: TCP

  InternalApiTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      HealthCheckIntervalSeconds: 10
      HealthCheckPath: "/readyz"
      HealthCheckPort: 6443
      HealthCheckProtocol: HTTPS
      HealthyThresholdCount: 2
      UnhealthyThresholdCount: 2
      Port: 6443
      Protocol: TCP
      TargetType: ip
      VpcId:
        Ref: VpcId
      TargetGroupAttributes:
      - Key: deregistration_delay.timeout_seconds
        Value: 60

  InternalServiceInternalListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      DefaultActions:
      - Type: forward
        TargetGroupArn:
          Ref: InternalServiceTargetGroup
      LoadBalancerArn:
        Ref: IntApiElb
      Port: 22623
      Protocol: TCP

  InternalServiceTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      HealthCheckIntervalSeconds: 10
      HealthCheckPath: "/healthz"
      HealthCheckPort: 22623
      HealthCheckProtocol: HTTPS
      HealthyThresholdCount: 2
      UnhealthyThresholdCount: 2
      Port: 22623
      Protocol: TCP
      TargetType: ip
      VpcId:
        Ref: VpcId
      TargetGroupAttributes:
      - Key: deregistration_delay.timeout_seconds
        Value: 60

  RegisterTargetLambdaIamRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Join ["-", [!Ref InfrastructureName, "nlb", "lambda", "role"]]
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "lambda.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Path: "/"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action:
              [
                "elasticloadbalancing:RegisterTargets",
                "elasticloadbalancing:DeregisterTargets",
              ]
            Resource: !Ref InternalApiTargetGroup
          - Effect: "Allow"
            Action:
              [
                "elasticloadbalancing:RegisterTargets",
                "elasticloadbalancing:DeregisterTargets",
              ]
            Resource: !Ref InternalServiceTargetGroup
          - Effect: "Allow"
            Action:
              [
                "elasticloadbalancing:RegisterTargets",
                "elasticloadbalancing:DeregisterTargets",
              ]
            Resource: !Ref ExternalApiTargetGroup

  RegisterNlbIpTargets:
    Type: "AWS::Lambda::Function"
    Properties:
      Handler: "index.handler"
      Role:
        Fn::GetAtt:
        - "RegisterTargetLambdaIamRole"
        - "Arn"
      Code:
        ZipFile: |
          import json
          import boto3
          import cfnresponse
          def handler(event, context):
            elb = boto3.client('elbv2')
            if event['RequestType'] == 'Delete':
              elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}])
            elif event['RequestType'] == 'Create':
              elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}])
            responseData = {}
            cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp'])
      Runtime: "python3.7"
      Timeout: 120

  RegisterSubnetTagsLambdaIamRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Join ["-", [!Ref InfrastructureName, "subnet-tags-lambda-role"]]
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "lambda.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Path: "/"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "subnet-tagging-policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action:
              [
                "ec2:DeleteTags",
                "ec2:CreateTags"
              ]
            Resource: "arn:aws:ec2:*:*:subnet/*"
          - Effect: "Allow"
            Action:
              [
                "ec2:DescribeSubnets",
                "ec2:DescribeTags"
              ]
            Resource: "*"

  RegisterSubnetTags:
    Type: "AWS::Lambda::Function"
    Properties:
      Handler: "index.handler"
      Role:
        Fn::GetAtt:
        - "RegisterSubnetTagsLambdaIamRole"
        - "Arn"
      Code:
        ZipFile: |
          import json
          import boto3
          import cfnresponse
          def handler(event, context):
            ec2_client = boto3.client('ec2')
            if event['RequestType'] == 'Delete':
              for subnet_id in event['ResourceProperties']['Subnets']:
                ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName']}]);
            elif event['RequestType'] == 'Create':
              for subnet_id in event['ResourceProperties']['Subnets']:
                ec2_client.create_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}]);
            responseData = {}
            cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0])
      Runtime: "python3.7"
      Timeout: 120

  RegisterPublicSubnetTags:
    Type: Custom::SubnetRegister
    Properties:
      ServiceToken: !GetAtt RegisterSubnetTags.Arn
      InfrastructureName: !Ref InfrastructureName
      Subnets: !Ref PublicSubnets

  RegisterPrivateSubnetTags:
    Type: Custom::SubnetRegister
    Properties:
      ServiceToken: !GetAtt RegisterSubnetTags.Arn
      InfrastructureName: !Ref InfrastructureName
      Subnets: !Ref PrivateSubnets

Outputs:
  PrivateHostedZoneId:
    Description: Hosted zone ID for the private DNS, which is required for private records.
    Value: !Ref IntDns
  ExternalApiLoadBalancerName:
    Description: Full name of the external API load balancer.
    Value: !GetAtt ExtApiElb.LoadBalancerFullName
  InternalApiLoadBalancerName:
    Description: Full name of the internal API load balancer.
    Value: !GetAtt IntApiElb.LoadBalancerFullName
  ApiServerDnsName:
    Description: Full hostname of the API server, which is required for the Ignition config files.
    Value: !Join [".", ["api-int", !Ref ClusterName, !Ref HostedZoneName]]
  RegisterNlbIpTargetsLambda:
    Description: Lambda ARN useful to help register or deregister IP targets for these load balancers.
    Value: !GetAtt RegisterNlbIpTargets.Arn
  ExternalApiTargetGroupArn:
    Description: ARN of the external API target group.
    Value: !Ref ExternalApiTargetGroup
  InternalApiTargetGroupArn:
    Description: ARN of the internal API target group.
    Value: !Ref InternalApiTargetGroup
  InternalServiceTargetGroupArn:
    Description: ARN of the internal service target group.
    Value: !Ref InternalServiceTargetGroup

If you are deploying your cluster to an AWS government or secret region, you must update the InternalApiServerRecord to use CNAME records. Records of type ALIAS are not supported for AWS government regions. For example:

Type: CNAME
TTL: 10
ResourceRecords:
- !GetAtt IntApiElb.DNSName

Creating security groups and roles in AWS

You must create security groups and roles in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the security groups and roles that your OpenShift Container Platform cluster requires.

If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites
  • You configured an AWS account.

  • You added your AWS keys and region to your local AWS profile by running aws configure.

  • You generated the Ignition config files for your cluster.

  • You created and configured a VPC and associated subnets in AWS.

Procedure
  1. Create a JSON file that contains the parameter values that the template requires:

    [
      {
        "ParameterKey": "InfrastructureName", (1)
        "ParameterValue": "mycluster-<random_string>" (2)
      },
      {
        "ParameterKey": "VpcCidr", (3)
        "ParameterValue": "10.0.0.0/16" (4)
      },
      {
        "ParameterKey": "PrivateSubnets", (5)
        "ParameterValue": "subnet-<random_string>" (6)
      },
      {
        "ParameterKey": "VpcId", (7)
        "ParameterValue": "vpc-<random_string>" (8)
      }
    ]
    1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.
    2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.
    3 The CIDR block for the VPC.
    4 Specify the CIDR block parameter that you used for the VPC, in the form x.x.x.x/16-24.
    5 The private subnets that you created for your VPC.
    6 Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC.
    7 The VPC that you created for the cluster.
    8 Specify the VpcId value from the output of the CloudFormation template for the VPC.
  2. Copy the template from the CloudFormation template for security objects section of this topic and save it as a YAML file on your computer. This template describes the security groups and roles that your cluster requires.

  3. Launch the CloudFormation template to create a stack of AWS resources that represent the security groups and roles:

    You must enter the command on a single line.

    $ aws cloudformation create-stack --stack-name <name> (1)
         --template-body file://<template>.yaml (2)
         --parameters file://<parameters>.json (3)
         --capabilities CAPABILITY_NAMED_IAM (4)
    
    1 <name> is the name for the CloudFormation stack, such as cluster-sec. You need the name of this stack if you remove the cluster.
    2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved.
    3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file.
    4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources.
    Example output
    arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db
  4. Confirm that the template components exist:

    $ aws cloudformation describe-stacks --stack-name <name>

    After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster:

    MasterSecurityGroupId

    Master Security Group ID

    WorkerSecurityGroupId

    Worker Security Group ID

    MasterInstanceProfile

    Master IAM Instance Profile

    WorkerInstanceProfile

    Worker IAM Instance Profile
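    To review all of these output values at once, you can print the stack outputs as a table. The stack name cluster-sec below is only the example name used earlier in this procedure:

    $ aws cloudformation describe-stacks --stack-name cluster-sec \
         --query 'Stacks[0].Outputs' --output table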

CloudFormation template for security objects

You can use the following CloudFormation template to deploy the security objects that you need for your OpenShift Container Platform cluster.

CloudFormation template for security objects
AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM)

Parameters:
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster.
    Type: String
  VpcCidr:
    AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))$
    ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24.
    Default: 10.0.0.0/16
    Description: CIDR block for VPC.
    Type: String
  VpcId:
    Description: The VPC-scoped resources will belong to this VPC.
    Type: AWS::EC2::VPC::Id
  PrivateSubnets:
    Description: The internal subnets.
    Type: List<AWS::EC2::Subnet::Id>

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - InfrastructureName
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcId
      - VpcCidr
      - PrivateSubnets
    ParameterLabels:
      InfrastructureName:
        default: "Infrastructure Name"
      VpcId:
        default: "VPC ID"
      VpcCidr:
        default: "VPC CIDR"
      PrivateSubnets:
        default: "Private Subnets"

Resources:
  MasterSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Cluster Master Security Group
      SecurityGroupIngress:
      - IpProtocol: icmp
        FromPort: 0
        ToPort: 0
        CidrIp: !Ref VpcCidr
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: !Ref VpcCidr
      - IpProtocol: tcp
        ToPort: 6443
        FromPort: 6443
        CidrIp: !Ref VpcCidr
      - IpProtocol: tcp
        FromPort: 22623
        ToPort: 22623
        CidrIp: !Ref VpcCidr
      VpcId: !Ref VpcId

  WorkerSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Cluster Worker Security Group
      SecurityGroupIngress:
      - IpProtocol: icmp
        FromPort: 0
        ToPort: 0
        CidrIp: !Ref VpcCidr
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: !Ref VpcCidr
      VpcId: !Ref VpcId

  MasterIngressEtcd:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: etcd
      FromPort: 2379
      ToPort: 2380
      IpProtocol: tcp

  MasterIngressVxlan:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Vxlan packets
      FromPort: 4789
      ToPort: 4789
      IpProtocol: udp

  MasterIngressWorkerVxlan:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Vxlan packets
      FromPort: 4789
      ToPort: 4789
      IpProtocol: udp

  MasterIngressGeneve:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Geneve packets
      FromPort: 6081
      ToPort: 6081
      IpProtocol: udp

  MasterIngressWorkerGeneve:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Geneve packets
      FromPort: 6081
      ToPort: 6081
      IpProtocol: udp

  MasterIngressIpsecIke:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: IPsec IKE packets
      FromPort: 500
      ToPort: 500
      IpProtocol: udp

  MasterIngressIpsecNat:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: IPsec NAT-T packets
      FromPort: 4500
      ToPort: 4500
      IpProtocol: udp

  MasterIngressIpsecEsp:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: IPsec ESP packets
      IpProtocol: 50

  MasterIngressWorkerIpsecIke:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: IPsec IKE packets
      FromPort: 500
      ToPort: 500
      IpProtocol: udp

  MasterIngressWorkerIpsecNat:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: IPsec NAT-T packets
      FromPort: 4500
      ToPort: 4500
      IpProtocol: udp

  MasterIngressWorkerIpsecEsp:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: IPsec ESP packets
      IpProtocol: 50

  MasterIngressInternal:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: tcp

  MasterIngressWorkerInternal:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: tcp

  MasterIngressInternalUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: udp

  MasterIngressWorkerInternalUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: udp

  MasterIngressKube:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes kubelet, scheduler and controller manager
      FromPort: 10250
      ToPort: 10259
      IpProtocol: tcp

  MasterIngressWorkerKube:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes kubelet, scheduler and controller manager
      FromPort: 10250
      ToPort: 10259
      IpProtocol: tcp

  MasterIngressIngressServices:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: tcp

  MasterIngressWorkerIngressServices:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: tcp

  MasterIngressIngressServicesUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: udp

  MasterIngressWorkerIngressServicesUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: udp

  WorkerIngressVxlan:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Vxlan packets
      FromPort: 4789
      ToPort: 4789
      IpProtocol: udp

  WorkerIngressMasterVxlan:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Vxlan packets
      FromPort: 4789
      ToPort: 4789
      IpProtocol: udp

  WorkerIngressGeneve:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Geneve packets
      FromPort: 6081
      ToPort: 6081
      IpProtocol: udp

  WorkerIngressMasterGeneve:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Geneve packets
      FromPort: 6081
      ToPort: 6081
      IpProtocol: udp

  WorkerIngressIpsecIke:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: IPsec IKE packets
      FromPort: 500
      ToPort: 500
      IpProtocol: udp

  WorkerIngressIpsecNat:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: IPsec NAT-T packets
      FromPort: 4500
      ToPort: 4500
      IpProtocol: udp

  WorkerIngressIpsecEsp:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: IPsec ESP packets
      IpProtocol: 50

  WorkerIngressMasterIpsecIke:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: IPsec IKE packets
      FromPort: 500
      ToPort: 500
      IpProtocol: udp

  WorkerIngressMasterIpsecNat:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: IPsec NAT-T packets
      FromPort: 4500
      ToPort: 4500
      IpProtocol: udp

  WorkerIngressMasterIpsecEsp:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: IPsec ESP packets
      IpProtocol: 50

  WorkerIngressInternal:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: tcp

  WorkerIngressMasterInternal:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: tcp

  WorkerIngressInternalUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: udp

  WorkerIngressMasterInternalUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: udp

  WorkerIngressKube:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes secure kubelet port
      FromPort: 10250
      ToPort: 10250
      IpProtocol: tcp

  WorkerIngressWorkerKube:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal Kubernetes communication
      FromPort: 10250
      ToPort: 10250
      IpProtocol: tcp

  WorkerIngressIngressServices:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: tcp

  WorkerIngressMasterIngressServices:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: tcp

  WorkerIngressIngressServicesUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: udp

  WorkerIngressMasterIngressServicesUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: udp

  MasterIamRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "ec2.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action:
            - "ec2:AttachVolume"
            - "ec2:AuthorizeSecurityGroupIngress"
            - "ec2:CreateSecurityGroup"
            - "ec2:CreateTags"
            - "ec2:CreateVolume"
            - "ec2:DeleteSecurityGroup"
            - "ec2:DeleteVolume"
            - "ec2:Describe*"
            - "ec2:DetachVolume"
            - "ec2:ModifyInstanceAttribute"
            - "ec2:ModifyVolume"
            - "ec2:RevokeSecurityGroupIngress"
            - "elasticloadbalancing:AddTags"
            - "elasticloadbalancing:AttachLoadBalancerToSubnets"
            - "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer"
            - "elasticloadbalancing:CreateListener"
            - "elasticloadbalancing:CreateLoadBalancer"
            - "elasticloadbalancing:CreateLoadBalancerPolicy"
            - "elasticloadbalancing:CreateLoadBalancerListeners"
            - "elasticloadbalancing:CreateTargetGroup"
            - "elasticloadbalancing:ConfigureHealthCheck"
            - "elasticloadbalancing:DeleteListener"
            - "elasticloadbalancing:DeleteLoadBalancer"
            - "elasticloadbalancing:DeleteLoadBalancerListeners"
            - "elasticloadbalancing:DeleteTargetGroup"
            - "elasticloadbalancing:DeregisterInstancesFromLoadBalancer"
            - "elasticloadbalancing:DeregisterTargets"
            - "elasticloadbalancing:Describe*"
            - "elasticloadbalancing:DetachLoadBalancerFromSubnets"
            - "elasticloadbalancing:ModifyListener"
            - "elasticloadbalancing:ModifyLoadBalancerAttributes"
            - "elasticloadbalancing:ModifyTargetGroup"
            - "elasticloadbalancing:ModifyTargetGroupAttributes"
            - "elasticloadbalancing:RegisterInstancesWithLoadBalancer"
            - "elasticloadbalancing:RegisterTargets"
            - "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer"
            - "elasticloadbalancing:SetLoadBalancerPoliciesOfListener"
            - "kms:DescribeKey"
            Resource: "*"

  MasterInstanceProfile:
    Type: "AWS::IAM::InstanceProfile"
    Properties:
      Roles:
      - Ref: "MasterIamRole"

  WorkerIamRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "ec2.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "worker", "policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action:
            - "ec2:DescribeInstances"
            - "ec2:DescribeRegions"
            Resource: "*"

  WorkerInstanceProfile:
    Type: "AWS::IAM::InstanceProfile"
    Properties:
      Roles:
      - Ref: "WorkerIamRole"

Outputs:
  MasterSecurityGroupId:
    Description: Master Security Group ID
    Value: !GetAtt MasterSecurityGroup.GroupId

  WorkerSecurityGroupId:
    Description: Worker Security Group ID
    Value: !GetAtt WorkerSecurityGroup.GroupId

  MasterInstanceProfile:
    Description: Master IAM Instance Profile
    Value: !Ref MasterInstanceProfile

  WorkerInstanceProfile:
    Description: Worker IAM Instance Profile
    Value: !Ref WorkerInstanceProfile

Accessing RHCOS AMIs with stream metadata

In OpenShift Container Platform, stream metadata provides standardized metadata about RHCOS in the JSON format and injects the metadata into the cluster. Stream metadata is a stable format that supports multiple architectures and is intended to be self-documenting for maintaining automation.

You can use the coreos print-stream-json sub-command of openshift-install to access information about the boot images in the stream metadata format. This command provides a method for printing stream metadata in a scriptable, machine-readable format.

For user-provisioned installations, the openshift-install binary contains references to the version of RHCOS boot images that are tested for use with OpenShift Container Platform, such as the AWS AMI.

Procedure

To parse the stream metadata, use one of the following methods:

  • From a Go program, use the official stream-metadata-go library at https://github.com/coreos/stream-metadata-go. You can also view example code in the library.

  • From another programming language, such as Python or Ruby, use the JSON library of your preferred programming language.

  • From a command-line utility that handles JSON data, such as jq:

    • Print the current x86_64 AMI for an AWS region, such as us-west-1:

      $ openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions["us-west-1"].image'
      Example output
      ami-0d3e625f84626bbda

      The output of this command is the AWS AMI ID for the us-west-1 region. The AMI must belong to the same region as the cluster.
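      To list the published AMI for every region in a single pass, you can iterate over the regions object. The jq expression below is one possible approach and assumes the same stream structure as the command above:

      $ openshift-install coreos print-stream-json | \
           jq -r '.architectures.x86_64.images.aws.regions | to_entries[] | "\(.key) \(.value.image)"'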

RHCOS AMIs for the AWS infrastructure

Red Hat provides Red Hat Enterprise Linux CoreOS (RHCOS) AMIs that are valid for the various AWS regions that you can manually specify for your OpenShift Container Platform nodes.

By importing your own AMI, you can also install to regions that do not have a published RHCOS AMI.

Table 1. RHCOS AMIs

AWS zone          AWS AMI
af-south-1        ami-0ce5aa99b7d576c79
ap-east-1         ami-0f6debc614042ce76
ap-northeast-1    ami-0423a1bf292f34dc3
ap-northeast-2    ami-0889161041cb9d77f
ap-northeast-3    ami-00564b0d6cbb676b1
ap-south-1        ami-0650f4166d12ccead
ap-southeast-1    ami-0b09ad848356811c7
ap-southeast-2    ami-013484d0474ab5860
ca-central-1      ami-03291c3e2b74c32b9
eu-central-1      ami-0510f6f15c25b29d4
eu-north-1        ami-03a3119ba25eb55b1
eu-south-1        ami-04f719435625c1313
eu-west-1         ami-08e20744bd1c89c8e
eu-west-2         ami-0c190f5d05b071c7a
eu-west-3         ami-0eb0bf894fdf1d416
me-south-1        ami-073928aa740f738bd
sa-east-1         ami-01242f1bac18cc0fd
us-east-1         ami-05ed2cc6e70392ff9
us-east-2         ami-00b3a5054da356288
us-west-1         ami-021f626622b5238f3
us-west-2         ami-0c9fd8b47bfd717e8

AWS regions without a published RHCOS AMI

You can deploy an OpenShift Container Platform cluster to Amazon Web Services (AWS) regions without native support for a Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) or the AWS software development kit (SDK). If a published AMI is not available for an AWS region, you can upload a custom AMI prior to installing the cluster. This is required if you are deploying your cluster to an AWS government or secret region. AWS government and secret regions are supported by the AWS SDK.

If you are deploying to a region not supported by the AWS SDK and you do not specify a custom AMI, the installation program copies the us-east-1 AMI to the user account automatically. Then the installation program creates the control plane machines with encrypted EBS volumes using the default or user-specified Key Management Service (KMS) key. This allows the AMI to follow the same process workflow as published RHCOS AMIs.

Because it is not published, a region without native support for an RHCOS AMI is not available to select from the installation program during cluster creation. However, you can install to this region by configuring the custom AMI in the install-config.yaml file.

Uploading a custom RHCOS AMI in AWS

If you are deploying to a custom Amazon Web Services (AWS) region, you must upload a custom Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) that belongs to that region.

Prerequisites
  • You configured an AWS account.

  • You created an Amazon S3 bucket with the required IAM service role.

  • You uploaded your RHCOS VMDK file to Amazon S3. The RHCOS VMDK file must be the highest version that is less than or equal to the OpenShift Container Platform version you are installing.

  • You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer.

Procedure
  1. Export your AWS profile as an environment variable:

    $ export AWS_PROFILE=<aws_profile> (1)
    1 The AWS profile name that holds your AWS credentials, like govcloud.
  2. Export the region to associate with your custom AMI as an environment variable:

    $ export AWS_DEFAULT_REGION=<aws_region> (1)
    1 The AWS region, like us-gov-east-1.
  3. Export the version of RHCOS you uploaded to Amazon S3 as an environment variable:

    $ export RHCOS_VERSION=<version> (1)
    1 The RHCOS VMDK version, like 4.8.0.
  4. Export the Amazon S3 bucket name as an environment variable:

    $ export VMIMPORT_BUCKET_NAME=<s3_bucket_name>
  5. Create the containers.json file and define your RHCOS VMDK file:

    $ cat <<EOF > containers.json
    {
       "Description": "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64",
       "Format": "vmdk",
       "UserBucket": {
          "S3Bucket": "${VMIMPORT_BUCKET_NAME}",
          "S3Key": "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64.vmdk"
       }
    }
    EOF
  6. Import the RHCOS disk as an Amazon EBS snapshot:

    $ aws ec2 import-snapshot --region ${AWS_DEFAULT_REGION} \
         --description "<description>" \ (1)
         --disk-container "file://<file_path>/containers.json" (2)
    
    1 The description of your RHCOS disk being imported, like rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64.
    2 The file path to the JSON file describing your RHCOS disk. The JSON file should contain your Amazon S3 bucket name and key.
  7. Check the status of the image import:

    $ watch -n 5 aws ec2 describe-import-snapshot-tasks --region ${AWS_DEFAULT_REGION}
    Example output
    {
        "ImportSnapshotTasks": [
            {
                "Description": "rhcos-4.7.0-x86_64-aws.x86_64",
                "ImportTaskId": "import-snap-fh6i8uil",
                "SnapshotTaskDetail": {
                    "Description": "rhcos-4.7.0-x86_64-aws.x86_64",
                    "DiskImageSize": 819056640.0,
                    "Format": "VMDK",
                    "SnapshotId": "snap-06331325870076318",
                    "Status": "completed",
                    "UserBucket": {
                        "S3Bucket": "external-images",
                        "S3Key": "rhcos-4.7.0-x86_64-aws.x86_64.vmdk"
                    }
                }
            }
        ]
    }

    Copy the SnapshotId to register the image.
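    Instead of copying the ID from the JSON output by hand, you can query it directly after the task completes. This assumes that the import task shown above is the most recent one in your account:

    $ aws ec2 describe-import-snapshot-tasks --region ${AWS_DEFAULT_REGION} \
         --query 'ImportSnapshotTasks[0].SnapshotTaskDetail.SnapshotId' --output text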

  8. Create a custom RHCOS AMI from the RHCOS snapshot:

    $ aws ec2 register-image \
       --region ${AWS_DEFAULT_REGION} \
       --architecture x86_64 \ (1)
       --description "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64" \ (2)
       --ena-support \
       --name "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64" \ (3)
       --virtualization-type hvm \
       --root-device-name '/dev/xvda' \
       --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' (4)
    
    1 The RHCOS VMDK architecture type, like x86_64, s390x, or ppc64le.
    2 The Description from the imported snapshot.
    3 The name of the RHCOS AMI.
    4 The SnapshotID from the imported snapshot.
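    After the register-image call returns, you can optionally confirm that the new AMI exists in your account. The name filter below assumes the naming convention used in this procedure:

    $ aws ec2 describe-images --region ${AWS_DEFAULT_REGION} \
         --owners self \
         --filters "Name=name,Values=rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64" \
         --query 'Images[].{Name:Name,ImageId:ImageId}' --output table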

To learn more about these APIs, see the AWS documentation for importing snapshots and creating EBS-backed AMIs.

Creating the bootstrap node in AWS

You must create the bootstrap node in Amazon Web Services (AWS) to use during OpenShift Container Platform cluster initialization.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the bootstrap node that your OpenShift Container Platform installation requires.

If you do not use the provided CloudFormation template to create your bootstrap node, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites
  • You configured an AWS account.

  • You added your AWS keys and region to your local AWS profile by running aws configure.

  • You generated the Ignition config files for your cluster.

  • You created and configured a VPC and associated subnets in AWS.

  • You created and configured DNS, load balancers, and listeners in AWS.

  • You created the security groups and roles required for your cluster in AWS.

Procedure
  1. Provide a location to serve the bootstrap.ign Ignition config file to your cluster. This file is located in your installation directory. One way to do this is to create an S3 bucket in your cluster’s region and upload the Ignition config file to it.

    The provided CloudFormation template assumes that the Ignition config files for your cluster are served from an S3 bucket. If you choose to serve the files from another location, you must modify the templates.

    If you are deploying to a region that has endpoints that differ from the AWS SDK, or you are providing your own custom endpoints, you must use a presigned URL for your S3 bucket instead of the s3:// schema.

    The bootstrap Ignition config file does contain secrets, like X.509 keys. The following steps provide basic security for the S3 bucket. To provide additional security, you can enable an S3 bucket policy to allow only certain users, such as the OpenShift IAM user, to access objects that the bucket contains. You can avoid S3 entirely and serve your bootstrap Ignition config file from any address that the bootstrap machine can reach.

    1. Create the bucket:

      $ aws s3 mb s3://<cluster-name>-infra (1)
      1 <cluster-name>-infra is the bucket name. When creating the install-config.yaml file, replace <cluster-name> with the name specified for the cluster.
    2. Upload the bootstrap.ign Ignition config file to the bucket:

      $ aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign (1)
      1 For <installation_directory>, specify the path to the directory that you stored the installation files in.
    3. Verify that the file uploaded:

      $ aws s3 ls s3://<cluster-name>-infra/
      Example output
      2019-04-03 16:15:16     314878 bootstrap.ign
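      If your environment requires a presigned URL instead of the s3:// schema, as described earlier in this step, you can generate a time-limited URL for the uploaded object. The one-hour expiry below is only an example value:

      $ aws s3 presign s3://<cluster-name>-infra/bootstrap.ign --expires-in 3600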
  2. Create a JSON file that contains the parameter values that the template requires:

    [
      {
        "ParameterKey": "InfrastructureName", (1)
        "ParameterValue": "mycluster-<random_string>" (2)
      },
      {
        "ParameterKey": "RhcosAmi", (3)
        "ParameterValue": "ami-<random_string>" (4)
      },
      {
        "ParameterKey": "AllowedBootstrapSshCidr", (5)
        "ParameterValue": "0.0.0.0/0" (6)
      },
      {
        "ParameterKey": "PublicSubnet", (7)
        "ParameterValue": "subnet-<random_string>" (8)
      },
      {
        "ParameterKey": "MasterSecurityGroupId", (9)
        "ParameterValue": "sg-<random_string>" (10)
      },
      {
        "ParameterKey": "VpcId", (11)
        "ParameterValue": "vpc-<random_string>" (12)
      },
      {
        "ParameterKey": "BootstrapIgnitionLocation", (13)
        "ParameterValue": "s3://<bucket_name>/bootstrap.ign" (14)
      },
      {
        "ParameterKey": "AutoRegisterELB", (15)
        "ParameterValue": "yes" (16)
      },
      {
        "ParameterKey": "RegisterNlbIpTargetsLambdaArn", (17)
        "ParameterValue": "arn:aws:lambda:<region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" (18)
      },
      {
        "ParameterKey": "ExternalApiTargetGroupArn", (19)
        "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" (20)
      },
      {
        "ParameterKey": "InternalApiTargetGroupArn", (21)
        "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" (22)
      },
      {
        "ParameterKey": "InternalServiceTargetGroupArn", (23)
        "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" (24)
      }
    ]
    
    1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.
    2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.
    3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the bootstrap node.
    4 Specify a valid AWS::EC2::Image::Id value.
    5 CIDR block to allow SSH access to the bootstrap node.
    6 Specify a CIDR block in the format x.x.x.x/16-24.
    7 The public subnet that is associated with your VPC to launch the bootstrap node into.
    8 Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC.
    9 The master security group ID (for registering temporary rules)
    10 Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles.
    11 The VPC that the created resources will belong to.
    12 Specify the VpcId value from the output of the CloudFormation template for the VPC.
    13 Location to fetch bootstrap Ignition config file from.
    14 Specify the S3 bucket and file name in the form s3://<bucket_name>/bootstrap.ign.
    15 Whether or not to register a network load balancer (NLB).
    16 Specify yes or no. If you specify yes, you must provide a Lambda Amazon Resource Name (ARN) value.
    17 The ARN for the NLB IP target registration Lambda function.
    18 Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.
    19 The ARN for external API load balancer target group.
    20 Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.
    21 The ARN for internal API load balancer target group.
    22 Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.
    23 The ARN for internal service load balancer target group.
    24 Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.
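
    If you need to look up the infrastructure name that callouts 1 and 2 reference, you can read it from the installation metadata. The following command is a minimal sketch that assumes the jq tool is installed and that <installation_directory> is the directory that holds your installation files:

      $ jq -r .infraID <installation_directory>/metadata.json

      The command prints the <cluster-name>-<random-string> value to use for the InfrastructureName parameter.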
  3. Copy the template from the CloudFormation template for the bootstrap machine section of this topic and save it as a YAML file on your computer. This template describes the bootstrap machine that your cluster requires.

  4. Launch the CloudFormation template to create a stack of AWS resources that represent the bootstrap node:

    You must enter the command on a single line.

    $ aws cloudformation create-stack --stack-name <name> (1)
         --template-body file://<template>.yaml (2)
         --parameters file://<parameters>.json (3)
         --capabilities CAPABILITY_NAMED_IAM (4)
    
    1 <name> is the name for the CloudFormation stack, such as cluster-bootstrap. You need the name of this stack if you remove the cluster.
    2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved.
    3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file.
    4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources.
    Example output
    arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83
  5. Confirm that the template components exist:

    $ aws cloudformation describe-stacks --stack-name <name>

    After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster:

    BootstrapInstanceId
    The bootstrap instance ID.

    BootstrapPublicIp
    The bootstrap node public IP address.

    BootstrapPrivateIp
    The bootstrap node private IP address.
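
    If you prefer to block until the stack finishes and then read the output values in one step, the following commands are one possible approach. The --query expression is a standard JMESPath filter and is optional:

      $ aws cloudformation wait stack-create-complete --stack-name <name>
      $ aws cloudformation describe-stacks --stack-name <name> --query 'Stacks[0].Outputs'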

CloudFormation template for the bootstrap machine

You can use the following CloudFormation template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster.

CloudFormation template for the bootstrap machine
AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM)

Parameters:
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster.
    Type: String
  RhcosAmi:
    Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap.
    Type: AWS::EC2::Image::Id
  AllowedBootstrapSshCidr:
    AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/([0-9]|1[0-9]|2[0-9]|3[0-2]))$
    ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32.
    Default: 0.0.0.0/0
    Description: CIDR block to allow SSH access to the bootstrap node.
    Type: String
  PublicSubnet:
    Description: The public subnet to launch the bootstrap node into.
    Type: AWS::EC2::Subnet::Id
  MasterSecurityGroupId:
    Description: The master security group ID for registering temporary rules.
    Type: AWS::EC2::SecurityGroup::Id
  VpcId:
    Description: The VPC-scoped resources will belong to this VPC.
    Type: AWS::EC2::VPC::Id
  BootstrapIgnitionLocation:
    Default: s3://my-s3-bucket/bootstrap.ign
    Description: Ignition config file location.
    Type: String
  AutoRegisterELB:
    Default: "yes"
    AllowedValues:
    - "yes"
    - "no"
    Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter?
    Type: String
  RegisterNlbIpTargetsLambdaArn:
    Description: ARN for NLB IP target registration lambda.
    Type: String
  ExternalApiTargetGroupArn:
    Description: ARN for external API load balancer target group.
    Type: String
  InternalApiTargetGroupArn:
    Description: ARN for internal API load balancer target group.
    Type: String
  InternalServiceTargetGroupArn:
    Description: ARN for internal service load balancer target group.
    Type: String

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - InfrastructureName
    - Label:
        default: "Host Information"
      Parameters:
      - RhcosAmi
      - BootstrapIgnitionLocation
      - MasterSecurityGroupId
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcId
      - AllowedBootstrapSshCidr
      - PublicSubnet
    - Label:
        default: "Load Balancer Automation"
      Parameters:
      - AutoRegisterELB
      - RegisterNlbIpTargetsLambdaArn
      - ExternalApiTargetGroupArn
      - InternalApiTargetGroupArn
      - InternalServiceTargetGroupArn
    ParameterLabels:
      InfrastructureName:
        default: "Infrastructure Name"
      VpcId:
        default: "VPC ID"
      AllowedBootstrapSshCidr:
        default: "Allowed SSH Source"
      PublicSubnet:
        default: "Public Subnet"
      RhcosAmi:
        default: "Red Hat Enterprise Linux CoreOS AMI ID"
      BootstrapIgnitionLocation:
        default: "Bootstrap Ignition Source"
      MasterSecurityGroupId:
        default: "Master Security Group ID"
      AutoRegisterELB:
        default: "Use Provided ELB Automation"

Conditions:
  DoRegistration: !Equals ["yes", !Ref AutoRegisterELB]

Resources:
  BootstrapIamRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "ec2.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Path: "/"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "bootstrap", "policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action: "ec2:Describe*"
            Resource: "*"
          - Effect: "Allow"
            Action: "ec2:AttachVolume"
            Resource: "*"
          - Effect: "Allow"
            Action: "ec2:DetachVolume"
            Resource: "*"
          - Effect: "Allow"
            Action: "s3:GetObject"
            Resource: "*"

  BootstrapInstanceProfile:
    Type: "AWS::IAM::InstanceProfile"
    Properties:
      Path: "/"
      Roles:
      - Ref: "BootstrapIamRole"

  BootstrapSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Cluster Bootstrap Security Group
      SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: !Ref AllowedBootstrapSshCidr
      - IpProtocol: tcp
        ToPort: 19531
        FromPort: 19531
        CidrIp: 0.0.0.0/0
      VpcId: !Ref VpcId

  BootstrapInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      IamInstanceProfile: !Ref BootstrapInstanceProfile
      InstanceType: "i3.large"
      NetworkInterfaces:
      - AssociatePublicIpAddress: "true"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "BootstrapSecurityGroup"
        - !Ref "MasterSecurityGroupId"
        SubnetId: !Ref "PublicSubnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"replace":{"source":"${S3Loc}"}},"version":"3.1.0"}}'
        - {
          S3Loc: !Ref BootstrapIgnitionLocation
        }

  RegisterBootstrapApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref ExternalApiTargetGroupArn
      TargetIp: !GetAtt BootstrapInstance.PrivateIp

  RegisterBootstrapInternalApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalApiTargetGroupArn
      TargetIp: !GetAtt BootstrapInstance.PrivateIp

  RegisterBootstrapInternalServiceTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalServiceTargetGroupArn
      TargetIp: !GetAtt BootstrapInstance.PrivateIp

Outputs:
  BootstrapInstanceId:
    Description: Bootstrap Instance ID.
    Value: !Ref BootstrapInstance

  BootstrapPublicIp:
    Description: The bootstrap node public IP address.
    Value: !GetAtt BootstrapInstance.PublicIp

  BootstrapPrivateIp:
    Description: The bootstrap node private IP address.
    Value: !GetAtt BootstrapInstance.PrivateIp

Creating the control plane machines in AWS

You must create the control plane machines in Amazon Web Services (AWS) that your cluster will use.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the control plane nodes.

The CloudFormation template creates a stack that represents three control plane nodes.

If you do not use the provided CloudFormation template to create your control plane nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites
  • You configured an AWS account.

  • You added your AWS keys and region to your local AWS profile by running aws configure.

  • You generated the Ignition config files for your cluster.

  • You created and configured a VPC and associated subnets in AWS.

  • You created and configured DNS, load balancers, and listeners in AWS.

  • You created the security groups and roles required for your cluster in AWS.

  • You created the bootstrap machine.

Procedure
  1. Create a JSON file that contains the parameter values that the template requires:

    [
      {
        "ParameterKey": "InfrastructureName", (1)
        "ParameterValue": "mycluster-<random_string>" (2)
      },
      {
        "ParameterKey": "RhcosAmi", (3)
        "ParameterValue": "ami-<random_string>" (4)
      },
      {
        "ParameterKey": "AutoRegisterDNS", (5)
        "ParameterValue": "yes" (6)
      },
      {
        "ParameterKey": "PrivateHostedZoneId", (7)
        "ParameterValue": "<random_string>" (8)
      },
      {
        "ParameterKey": "PrivateHostedZoneName", (9)
        "ParameterValue": "mycluster.example.com" (10)
      },
      {
        "ParameterKey": "Master0Subnet", (11)
        "ParameterValue": "subnet-<random_string>" (12)
      },
      {
        "ParameterKey": "Master1Subnet", (11)
        "ParameterValue": "subnet-<random_string>" (12)
      },
      {
        "ParameterKey": "Master2Subnet", (11)
        "ParameterValue": "subnet-<random_string>" (12)
      },
      {
        "ParameterKey": "MasterSecurityGroupId", (13)
        "ParameterValue": "sg-<random_string>" (14)
      },
      {
        "ParameterKey": "IgnitionLocation", (15)
        "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/master" (16)
      },
      {
        "ParameterKey": "CertificateAuthorities", (17)
        "ParameterValue": "data:text/plain;charset=utf-8;base64,ABC...xYz==" (18)
      },
      {
        "ParameterKey": "MasterInstanceProfileName", (19)
        "ParameterValue": "<roles_stack>-MasterInstanceProfile-<random_string>" (20)
      },
      {
        "ParameterKey": "MasterInstanceType", (21)
        "ParameterValue": "m5.xlarge" (22)
      },
      {
        "ParameterKey": "AutoRegisterELB", (23)
        "ParameterValue": "yes" (24)
      },
      {
        "ParameterKey": "RegisterNlbIpTargetsLambdaArn", (25)
        "ParameterValue": "arn:aws:lambda:<region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" (26)
      },
      {
        "ParameterKey": "ExternalApiTargetGroupArn", (27)
        "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" (28)
      },
      {
        "ParameterKey": "InternalApiTargetGroupArn", (29)
        "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" (30)
      },
      {
        "ParameterKey": "InternalServiceTargetGroupArn", (31)
        "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" (32)
      }
    ]
    1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.
    2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.
    3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the control plane machines.
    4 Specify an AWS::EC2::Image::Id value.
    5 Whether or not to perform DNS etcd registration.
    6 Specify yes or no. If you specify yes, you must provide hosted zone information.
    7 The Route 53 private zone ID to register the etcd targets with.
    8 Specify the PrivateHostedZoneId value from the output of the CloudFormation template for DNS and load balancing.
    9 The Route 53 zone to register the targets with.
    10 Specify <cluster_name>.<domain_name> where <domain_name> is the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console.
    11 A subnet, preferably private, to launch the control plane machines on.
    12 Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing.
    13 The master security group ID to associate with control plane nodes (also known as the master nodes).
    14 Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles.
    15 The location to fetch control plane Ignition config file from.
    16 Specify the generated Ignition config file location, https://api-int.<cluster_name>.<domain_name>:22623/config/master.
    17 The base64 encoded certificate authority string to use.
    18 Specify the value from the master.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC...xYz==.
    19 The IAM profile to associate with control plane nodes.
    20 Specify the MasterInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles.
    21 The type of AWS instance to use for the control plane machines.
    22 Allowed values:
    • m4.xlarge

    • m4.2xlarge

    • m4.4xlarge

    • m4.10xlarge

    • m4.16xlarge

    • m5.xlarge

    • m5.2xlarge

    • m5.4xlarge

    • m5.8xlarge

    • m5.12xlarge

    • m5.16xlarge

    • m5a.xlarge

    • m5a.2xlarge

    • m5a.4xlarge

    • m5a.8xlarge

    • m5a.12xlarge

    • m5a.16xlarge

    • c4.2xlarge

    • c4.4xlarge

    • c4.8xlarge

    • c5.2xlarge

    • c5.4xlarge

    • c5.9xlarge

    • c5.12xlarge

    • c5.18xlarge

    • c5.24xlarge

    • c5a.2xlarge

    • c5a.4xlarge

    • c5a.8xlarge

    • c5a.12xlarge

    • c5a.16xlarge

    • c5a.24xlarge

    • r4.xlarge

    • r4.2xlarge

    • r4.4xlarge

    • r4.8xlarge

    • r4.16xlarge

    • r5.xlarge

    • r5.2xlarge

    • r5.4xlarge

    • r5.8xlarge

    • r5.12xlarge

    • r5.16xlarge

    • r5.24xlarge

    • r5a.xlarge

    • r5a.2xlarge

    • r5a.4xlarge

    • r5a.8xlarge

    • r5a.12xlarge

    • r5a.16xlarge

    • r5a.24xlarge

    23 Whether or not to register a network load balancer (NLB).
    24 Specify yes or no. If you specify yes, you must provide a Lambda Amazon Resource Name (ARN) value.
    25 The ARN for the NLB IP target registration Lambda function.
    26 Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.
    27 The ARN for external API load balancer target group.
    28 Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.
    29 The ARN for internal API load balancer target group.
    30 Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.
    31 The ARN for internal service load balancer target group.
    32 Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.
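
    If you want to pull the values for callouts 16 and 18 directly from the generated master.ign pointer file instead of copying them by hand, the following jq commands are one possible way to do so. This is a sketch that assumes the default pointer Ignition layout that the installation program generates and that jq is installed:

      $ jq -r '.ignition.config.merge[0].source' <installation_directory>/master.ign
      $ jq -r '.ignition.security.tls.certificateAuthorities[0].source' <installation_directory>/master.ign

      The first command prints the IgnitionLocation value and the second command prints the CertificateAuthorities value.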
  2. Copy the template from the CloudFormation template for control plane machines section of this topic and save it as a YAML file on your computer. This template describes the control plane machines that your cluster requires.

  3. If you specified an m5 instance type as the value for MasterInstanceType, add that instance type to the MasterInstanceType.AllowedValues parameter in the CloudFormation template.

  4. Launch the CloudFormation template to create a stack of AWS resources that represent the control plane nodes:

    You must enter the command on a single line.

    $ aws cloudformation create-stack --stack-name <name> (1)
         --template-body file://<template>.yaml (2)
         --parameters file://<parameters>.json (3)
    1 <name> is the name for the CloudFormation stack, such as cluster-control-plane. You need the name of this stack if you remove the cluster.
    2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved.
    3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file.
    Example output
    arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b

    The CloudFormation template creates a stack that represents three control plane nodes.

  5. Confirm that the template components exist:

    $ aws cloudformation describe-stacks --stack-name <name>
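
    After the StackStatus displays CREATE_COMPLETE, you can also read the control plane private IP addresses directly from the stack output, for example with an optional JMESPath query:

      $ aws cloudformation describe-stacks --stack-name <name> --query 'Stacks[0].Outputs[?OutputKey==`PrivateIPs`].OutputValue' --output text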

CloudFormation template for control plane machines

You can use the following CloudFormation template to deploy the control plane machines that you need for your OpenShift Container Platform cluster.

CloudFormation template for control plane machines
AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Node Launch (EC2 master instances)

Parameters:
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider.
    Type: String
  RhcosAmi:
    Description: Current Red Hat Enterprise Linux CoreOS AMI to use for the control plane machines.
    Type: AWS::EC2::Image::Id
  AutoRegisterDNS:
    Default: "yes"
    AllowedValues:
    - "yes"
    - "no"
    Description: Do you want to invoke DNS etcd registration, which requires Hosted Zone information?
    Type: String
  PrivateHostedZoneId:
    Description: The Route53 private zone ID to register the etcd targets with, such as Z21IXYZABCZ2A4.
    Type: String
  PrivateHostedZoneName:
    Description: The Route53 zone to register the targets with, such as cluster.example.com. Omit the trailing period.
    Type: String
  Master0Subnet:
    Description: The subnets, recommend private, to launch the master nodes into.
    Type: AWS::EC2::Subnet::Id
  Master1Subnet:
    Description: The subnets, recommend private, to launch the master nodes into.
    Type: AWS::EC2::Subnet::Id
  Master2Subnet:
    Description: The subnets, recommend private, to launch the master nodes into.
    Type: AWS::EC2::Subnet::Id
  MasterSecurityGroupId:
    Description: The master security group ID to associate with master nodes.
    Type: AWS::EC2::SecurityGroup::Id
  IgnitionLocation:
    Default: https://api-int.$CLUSTER_NAME.$DOMAIN:22623/config/master
    Description: Ignition config file location.
    Type: String
  CertificateAuthorities:
    Default: data:text/plain;charset=utf-8;base64,ABC...xYz==
    Description: Base64 encoded certificate authority string to use.
    Type: String
  MasterInstanceProfileName:
    Description: IAM profile to associate with master nodes.
    Type: String
  MasterInstanceType:
    Default: m5.xlarge
    Type: String
    AllowedValues:
    - "m4.xlarge"
    - "m4.2xlarge"
    - "m4.4xlarge"
    - "m4.10xlarge"
    - "m4.16xlarge"
    - "m5.xlarge"
    - "m5.2xlarge"
    - "m5.4xlarge"
    - "m5.8xlarge"
    - "m5.12xlarge"
    - "m5.16xlarge"
    - "m5a.xlarge"
    - "m5a.2xlarge"
    - "m5a.4xlarge"
    - "m5a.8xlarge"
    - "m5a.12xlarge"
    - "m5a.16xlarge"
    - "c4.2xlarge"
    - "c4.4xlarge"
    - "c4.8xlarge"
    - "c5.2xlarge"
    - "c5.4xlarge"
    - "c5.9xlarge"
    - "c5.12xlarge"
    - "c5.18xlarge"
    - "c5.24xlarge"
    - "c5a.2xlarge"
    - "c5a.4xlarge"
    - "c5a.8xlarge"
    - "c5a.12xlarge"
    - "c5a.16xlarge"
    - "c5a.24xlarge"
    - "r4.xlarge"
    - "r4.2xlarge"
    - "r4.4xlarge"
    - "r4.8xlarge"
    - "r4.16xlarge"
    - "r5.xlarge"
    - "r5.2xlarge"
    - "r5.4xlarge"
    - "r5.8xlarge"
    - "r5.12xlarge"
    - "r5.16xlarge"
    - "r5.24xlarge"
    - "r5a.xlarge"
    - "r5a.2xlarge"
    - "r5a.4xlarge"
    - "r5a.8xlarge"
    - "r5a.12xlarge"
    - "r5a.16xlarge"
    - "r5a.24xlarge"

  AutoRegisterELB:
    Default: "yes"
    AllowedValues:
    - "yes"
    - "no"
    Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter?
    Type: String
  RegisterNlbIpTargetsLambdaArn:
    Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
    Type: String
  ExternalApiTargetGroupArn:
    Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
    Type: String
  InternalApiTargetGroupArn:
    Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
    Type: String
  InternalServiceTargetGroupArn:
    Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
    Type: String

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - InfrastructureName
    - Label:
        default: "Host Information"
      Parameters:
      - MasterInstanceType
      - RhcosAmi
      - IgnitionLocation
      - CertificateAuthorities
      - MasterSecurityGroupId
      - MasterInstanceProfileName
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcId
      - AllowedBootstrapSshCidr
      - Master0Subnet
      - Master1Subnet
      - Master2Subnet
    - Label:
        default: "DNS"
      Parameters:
      - AutoRegisterDNS
      - PrivateHostedZoneName
      - PrivateHostedZoneId
    - Label:
        default: "Load Balancer Automation"
      Parameters:
      - AutoRegisterELB
      - RegisterNlbIpTargetsLambdaArn
      - ExternalApiTargetGroupArn
      - InternalApiTargetGroupArn
      - InternalServiceTargetGroupArn
    ParameterLabels:
      InfrastructureName:
        default: "Infrastructure Name"
      VpcId:
        default: "VPC ID"
      Master0Subnet:
        default: "Master-0 Subnet"
      Master1Subnet:
        default: "Master-1 Subnet"
      Master2Subnet:
        default: "Master-2 Subnet"
      MasterInstanceType:
        default: "Master Instance Type"
      MasterInstanceProfileName:
        default: "Master Instance Profile Name"
      RhcosAmi:
        default: "Red Hat Enterprise Linux CoreOS AMI ID"
      BootstrapIgnitionLocation:
        default: "Master Ignition Source"
      CertificateAuthorities:
        default: "Ignition CA String"
      MasterSecurityGroupId:
        default: "Master Security Group ID"
      AutoRegisterDNS:
        default: "Use Provided DNS Automation"
      AutoRegisterELB:
        default: "Use Provided ELB Automation"
      PrivateHostedZoneName:
        default: "Private Hosted Zone Name"
      PrivateHostedZoneId:
        default: "Private Hosted Zone ID"

Conditions:
  DoRegistration: !Equals ["yes", !Ref AutoRegisterELB]
  DoDns: !Equals ["yes", !Ref AutoRegisterDNS]

Resources:
  Master0:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      BlockDeviceMappings:
      - DeviceName: /dev/xvda
        Ebs:
          VolumeSize: "120"
          VolumeType: "gp2"
      IamInstanceProfile: !Ref MasterInstanceProfileName
      InstanceType: !Ref MasterInstanceType
      NetworkInterfaces:
      - AssociatePublicIpAddress: "false"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "MasterSecurityGroupId"
        SubnetId: !Ref "Master0Subnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}'
        - {
          SOURCE: !Ref IgnitionLocation,
          CA_BUNDLE: !Ref CertificateAuthorities,
        }
      Tags:
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "shared"

  RegisterMaster0:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref ExternalApiTargetGroupArn
      TargetIp: !GetAtt Master0.PrivateIp

  RegisterMaster0InternalApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalApiTargetGroupArn
      TargetIp: !GetAtt Master0.PrivateIp

  RegisterMaster0InternalServiceTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalServiceTargetGroupArn
      TargetIp: !GetAtt Master0.PrivateIp

  Master1:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      BlockDeviceMappings:
      - DeviceName: /dev/xvda
        Ebs:
          VolumeSize: "120"
          VolumeType: "gp2"
      IamInstanceProfile: !Ref MasterInstanceProfileName
      InstanceType: !Ref MasterInstanceType
      NetworkInterfaces:
      - AssociatePublicIpAddress: "false"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "MasterSecurityGroupId"
        SubnetId: !Ref "Master1Subnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}'
        - {
          SOURCE: !Ref IgnitionLocation,
          CA_BUNDLE: !Ref CertificateAuthorities,
        }
      Tags:
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "shared"

  RegisterMaster1:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref ExternalApiTargetGroupArn
      TargetIp: !GetAtt Master1.PrivateIp

  RegisterMaster1InternalApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalApiTargetGroupArn
      TargetIp: !GetAtt Master1.PrivateIp

  RegisterMaster1InternalServiceTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalServiceTargetGroupArn
      TargetIp: !GetAtt Master1.PrivateIp

  Master2:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      BlockDeviceMappings:
      - DeviceName: /dev/xvda
        Ebs:
          VolumeSize: "120"
          VolumeType: "gp2"
      IamInstanceProfile: !Ref MasterInstanceProfileName
      InstanceType: !Ref MasterInstanceType
      NetworkInterfaces:
      - AssociatePublicIpAddress: "false"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "MasterSecurityGroupId"
        SubnetId: !Ref "Master2Subnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}'
        - {
          SOURCE: !Ref IgnitionLocation,
          CA_BUNDLE: !Ref CertificateAuthorities,
        }
      Tags:
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "shared"

  RegisterMaster2:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref ExternalApiTargetGroupArn
      TargetIp: !GetAtt Master2.PrivateIp

  RegisterMaster2InternalApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalApiTargetGroupArn
      TargetIp: !GetAtt Master2.PrivateIp

  RegisterMaster2InternalServiceTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalServiceTargetGroupArn
      TargetIp: !GetAtt Master2.PrivateIp

  EtcdSrvRecords:
    Condition: DoDns
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref PrivateHostedZoneId
      Name: !Join [".", ["_etcd-server-ssl._tcp", !Ref PrivateHostedZoneName]]
      ResourceRecords:
      - !Join [
        " ",
        ["0 10 2380", !Join [".", ["etcd-0", !Ref PrivateHostedZoneName]]],
      ]
      - !Join [
        " ",
        ["0 10 2380", !Join [".", ["etcd-1", !Ref PrivateHostedZoneName]]],
      ]
      - !Join [
        " ",
        ["0 10 2380", !Join [".", ["etcd-2", !Ref PrivateHostedZoneName]]],
      ]
      TTL: 60
      Type: SRV

  Etcd0Record:
    Condition: DoDns
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref PrivateHostedZoneId
      Name: !Join [".", ["etcd-0", !Ref PrivateHostedZoneName]]
      ResourceRecords:
      - !GetAtt Master0.PrivateIp
      TTL: 60
      Type: A

  Etcd1Record:
    Condition: DoDns
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref PrivateHostedZoneId
      Name: !Join [".", ["etcd-1", !Ref PrivateHostedZoneName]]
      ResourceRecords:
      - !GetAtt Master1.PrivateIp
      TTL: 60
      Type: A

  Etcd2Record:
    Condition: DoDns
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref PrivateHostedZoneId
      Name: !Join [".", ["etcd-2", !Ref PrivateHostedZoneName]]
      ResourceRecords:
      - !GetAtt Master2.PrivateIp
      TTL: 60
      Type: A

Outputs:
  PrivateIPs:
    Description: The control-plane node private IP addresses.
    Value:
      !Join [
        ",",
        [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp]
      ]

Creating the worker nodes in AWS

You can create worker nodes in Amazon Web Services (AWS) for your cluster to use.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent a worker node.

The CloudFormation template creates a stack that represents one worker node. You must create a stack for each worker node.

If you do not use the provided CloudFormation template to create your worker nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites
  • You configured an AWS account.

  • You added your AWS keys and region to your local AWS profile by running aws configure.

  • You generated the Ignition config files for your cluster.

  • You created and configured a VPC and associated subnets in AWS.

  • You created and configured DNS, load balancers, and listeners in AWS.

  • You created the security groups and roles required for your cluster in AWS.

  • You created the bootstrap machine.

  • You created the control plane machines.

Procedure
  1. Create a JSON file that contains the parameter values that the CloudFormation template requires:

    [
      {
        "ParameterKey": "InfrastructureName", (1)
        "ParameterValue": "mycluster-<random_string>" (2)
      },
      {
        "ParameterKey": "RhcosAmi", (3)
        "ParameterValue": "ami-<random_string>" (4)
      },
      {
        "ParameterKey": "Subnet", (5)
        "ParameterValue": "subnet-<random_string>" (6)
      },
      {
        "ParameterKey": "WorkerSecurityGroupId", (7)
        "ParameterValue": "sg-<random_string>" (8)
      },
      {
        "ParameterKey": "IgnitionLocation", (9)
        "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/worker" (10)
      },
      {
        "ParameterKey": "CertificateAuthorities", (11)
        "ParameterValue": "" (12)
      },
      {
        "ParameterKey": "WorkerInstanceProfileName", (13)
        "ParameterValue": "" (14)
      },
      {
        "ParameterKey": "WorkerInstanceType", (15)
        "ParameterValue": "m4.2xlarge" (16)
      }
    ]
    1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.
    2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.
    3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the worker nodes.
    4 Specify an AWS::EC2::Image::Id value.
    5 A subnet, preferably private, to start the worker nodes on.
    6 Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing.
    7 The worker security group ID to associate with worker nodes.
    8 Specify the WorkerSecurityGroupId value from the output of the CloudFormation template for the security group and roles.
    9 The location to fetch the worker Ignition config file from.
    10 Specify the generated Ignition config location, https://api-int.<cluster_name>.<domain_name>:22623/config/worker.
    11 Base64 encoded certificate authority string to use.
    12 Specify the value from the worker.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC...xYz==.
    13 The IAM profile to associate with worker nodes.
    14 Specify the WorkerInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles.
    15 The type of AWS instance to use for the worker machines.
    16 Allowed values:
    • m4.large

    • m4.xlarge

    • m4.2xlarge

    • m4.4xlarge

    • m4.10xlarge

    • m4.16xlarge

    • m5.large

    • m5.xlarge

    • m5.2xlarge

    • m5.4xlarge

    • m5.8xlarge

    • m5.12xlarge

    • m5.16xlarge

    • m5a.large

    • m5a.xlarge

    • m5a.2xlarge

    • m5a.4xlarge

    • m5a.8xlarge

    • m5a.12xlarge

    • m5a.16xlarge

    • c4.large

    • c4.xlarge

    • c4.2xlarge

    • c4.4xlarge

    • c4.8xlarge

    • c5.large

    • c5.xlarge

    • c5.2xlarge

    • c5.4xlarge

    • c5.9xlarge

    • c5.12xlarge

    • c5.18xlarge

    • c5.24xlarge

    • c5a.large

    • c5a.xlarge

    • c5a.2xlarge

    • c5a.4xlarge

    • c5a.8xlarge

    • c5a.12xlarge

    • c5a.16xlarge

    • c5a.24xlarge

    • r4.large

    • r4.xlarge

    • r4.2xlarge

    • r4.4xlarge

    • r4.8xlarge

    • r4.16xlarge

    • r5.large

    • r5.xlarge

    • r5.2xlarge

    • r5.4xlarge

    • r5.8xlarge

    • r5.12xlarge

    • r5.16xlarge

    • r5.24xlarge

    • r5a.large

    • r5a.xlarge

    • r5a.2xlarge

    • r5a.4xlarge

    • r5a.8xlarge

    • r5a.12xlarge

    • r5a.16xlarge

    • r5a.24xlarge

    • t3.large

    • t3.xlarge

    • t3.2xlarge

    • t3a.large

    • t3a.xlarge

    • t3a.2xlarge

  2. Copy the template from the CloudFormation template for worker machines section of this topic and save it as a YAML file on your computer. This template describes the worker machines that your cluster requires.

  3. Optional: If you specified an m5 instance type as the value for WorkerInstanceType, add that instance type to the WorkerInstanceType.AllowedValues parameter in the CloudFormation template.

  4. Optional: If you are deploying with an AWS Marketplace image, update the ImageId property of the Worker0 resource in the CloudFormation template with the AMI ID that you obtained from your subscription.

  5. Launch the CloudFormation template to create a stack of AWS resources that represent a worker node:

    You must enter the command on a single line.

    $ aws cloudformation create-stack --stack-name <name> (1)
         --template-body file://<template>.yaml (2)
         --parameters file://<parameters>.json (3)
    1 <name> is the name for the CloudFormation stack, such as cluster-worker-1. You need the name of this stack if you remove the cluster.
    2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved.
    3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file.
    Example output
    arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59

    The CloudFormation template creates a stack that represents one worker node.

  6. Confirm that the template components exist:

    $ aws cloudformation describe-stacks --stack-name <name>
  7. Continue to create worker stacks until you have created enough worker machines for your cluster. You can create additional worker stacks by referencing the same template and parameter files and specifying a different stack name.

    You must create at least two worker machines, so you must create at least two stacks that use this CloudFormation template.
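
    For example, a small shell loop is one way to create several worker stacks from the same template and parameter file. The stack names, file names, and worker count shown here are illustrative assumptions; adjust them to your environment:

      $ for i in 0 1; do
          aws cloudformation create-stack --stack-name cluster-worker-$i \
               --template-body file://worker.yaml \
               --parameters file://worker-params.json
        done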

CloudFormation template for worker machines

You can use the following CloudFormation template to deploy the worker machines that you need for your OpenShift Container Platform cluster.

CloudFormation template for worker machines
AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Node Launch (EC2 worker instance)

Parameters:
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider.
    Type: String
  RhcosAmi:
    Description: Current Red Hat Enterprise Linux CoreOS AMI to use for the worker nodes.
    Type: AWS::EC2::Image::Id
  Subnet:
    Description: The subnet, preferably private, to launch the worker nodes into.
    Type: AWS::EC2::Subnet::Id
  WorkerSecurityGroupId:
    Description: The worker security group ID to associate with the worker nodes.
    Type: AWS::EC2::SecurityGroup::Id
  IgnitionLocation:
    Default: https://api-int.$CLUSTER_NAME.$DOMAIN:22623/config/worker
    Description: Ignition config file location.
    Type: String
  CertificateAuthorities:
    Default: data:text/plain;charset=utf-8;base64,ABC...xYz==
    Description: Base64 encoded certificate authority string to use.
    Type: String
  WorkerInstanceProfileName:
    Description: IAM profile to associate with the worker nodes.
    Type: String
  WorkerInstanceType:
    Default: m5.large
    Type: String
    AllowedValues:
    - "m4.large"
    - "m4.xlarge"
    - "m4.2xlarge"
    - "m4.4xlarge"
    - "m4.10xlarge"
    - "m4.16xlarge"
    - "m5.large"
    - "m5.xlarge"
    - "m5.2xlarge"
    - "m5.4xlarge"
    - "m5.8xlarge"
    - "m5.12xlarge"
    - "m5.16xlarge"
    - "m5a.large"
    - "m5a.xlarge"
    - "m5a.2xlarge"
    - "m5a.4xlarge"
    - "m5a.8xlarge"
    - "m5a.12xlarge"
    - "m5a.16xlarge"
    - "c4.large"
    - "c4.xlarge"
    - "c4.2xlarge"
    - "c4.4xlarge"
    - "c4.8xlarge"
    - "c5.large"
    - "c5.xlarge"
    - "c5.2xlarge"
    - "c5.4xlarge"
    - "c5.9xlarge"
    - "c5.12xlarge"
    - "c5.18xlarge"
    - "c5.24xlarge"
    - "c5a.large"
    - "c5a.xlarge"
    - "c5a.2xlarge"
    - "c5a.4xlarge"
    - "c5a.8xlarge"
    - "c5a.12xlarge"
    - "c5a.16xlarge"
    - "c5a.24xlarge"
    - "r4.large"
    - "r4.xlarge"
    - "r4.2xlarge"
    - "r4.4xlarge"
    - "r4.8xlarge"
    - "r4.16xlarge"
    - "r5.large"
    - "r5.xlarge"
    - "r5.2xlarge"
    - "r5.4xlarge"
    - "r5.8xlarge"
    - "r5.12xlarge"
    - "r5.16xlarge"
    - "r5.24xlarge"
    - "r5a.large"
    - "r5a.xlarge"
    - "r5a.2xlarge"
    - "r5a.4xlarge"
    - "r5a.8xlarge"
    - "r5a.12xlarge"
    - "r5a.16xlarge"
    - "r5a.24xlarge"
    - "t3.large"
    - "t3.xlarge"
    - "t3.2xlarge"
    - "t3a.large"
    - "t3a.xlarge"
    - "t3a.2xlarge"

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - InfrastructureName
    - Label:
        default: "Host Information"
      Parameters:
      - WorkerInstanceType
      - RhcosAmi
      - IgnitionLocation
      - CertificateAuthorities
      - WorkerSecurityGroupId
      - WorkerInstanceProfileName
    - Label:
        default: "Network Configuration"
      Parameters:
      - Subnet
    ParameterLabels:
      Subnet:
        default: "Subnet"
      InfrastructureName:
        default: "Infrastructure Name"
      WorkerInstanceType:
        default: "Worker Instance Type"
      WorkerInstanceProfileName:
        default: "Worker Instance Profile Name"
      RhcosAmi:
        default: "Red Hat Enterprise Linux CoreOS AMI ID"
      IgnitionLocation:
        default: "Worker Ignition Source"
      CertificateAuthorities:
        default: "Ignition CA String"
      WorkerSecurityGroupId:
        default: "Worker Security Group ID"

Resources:
  Worker0:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      BlockDeviceMappings:
      - DeviceName: /dev/xvda
        Ebs:
          VolumeSize: "120"
          VolumeType: "gp2"
      IamInstanceProfile: !Ref WorkerInstanceProfileName
      InstanceType: !Ref WorkerInstanceType
      NetworkInterfaces:
      - AssociatePublicIpAddress: "false"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "WorkerSecurityGroupId"
        SubnetId: !Ref "Subnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}'
        - {
          SOURCE: !Ref IgnitionLocation,
          CA_BUNDLE: !Ref CertificateAuthorities,
        }
      Tags:
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "shared"

Outputs:
  PrivateIP:
    Description: The compute node private IP address.
    Value: !GetAtt Worker0.PrivateIp

Initializing the bootstrap sequence on AWS with user-provisioned infrastructure

After you create all of the required infrastructure in Amazon Web Services (AWS), you can start the bootstrap sequence that initializes the OpenShift Container Platform control plane.

Prerequisites
  • You configured an AWS account.

  • You added your AWS keys and region to your local AWS profile by running aws configure.

  • You generated the Ignition config files for your cluster.

  • You created and configured a VPC and associated subnets in AWS.

  • You created and configured DNS, load balancers, and listeners in AWS.

  • You created the security groups and roles required for your cluster in AWS.

  • You created the bootstrap machine.

  • You created the control plane machines.

  • You created the worker nodes.

Procedure
  1. Change to the directory that contains the installation program and start the bootstrap process that initializes the OpenShift Container Platform control plane:

    $ ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ (1)
        --log-level=info (2)
    
    1 For <installation_directory>, specify the path to the directory that you stored the installation files in.
    2 To view different installation details, specify warn, debug, or error instead of info.
    Example output
    INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443...
    INFO API v1.19.0+9f84db3 up
    INFO Waiting up to 30m0s for bootstrapping to complete...
    INFO It is now safe to remove the bootstrap resources
    INFO Time elapsed: 1s

    If the command exits without a FATAL warning, your OpenShift Container Platform control plane has initialized.

    After the control plane initializes, it sets up the compute nodes and installs additional services in the form of Operators.
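
    While you wait for the command to complete, you can optionally watch the bootstrap progress on the bootstrap node itself. This troubleshooting step assumes that you can reach the BootstrapPublicIp address from the bootstrap stack output and that you provided an SSH key during installation:

      $ ssh core@<bootstrap_public_ip> journalctl -b -f -u bootkube.service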


Installing the OpenShift CLI by downloading the binary

You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.8. Download and install the new version of oc.

Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure
  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.

  2. Select the appropriate version in the Version drop-down menu.

  3. Click Download Now next to the OpenShift v4.8 Linux Client entry and save the file.

  4. Unpack the archive:

    $ tar xvzf <file>
  5. Place the oc binary in a directory that is on your PATH.

    To check your PATH, execute the following command:

    $ echo $PATH
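
    For example, on many Linux systems you can place the binary in /usr/local/bin, which is typically on the PATH. The destination directory is an assumption; choose any directory that appears in your PATH output:

    $ chmod +x oc
    $ sudo mv oc /usr/local/bin/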

After you install the OpenShift CLI, it is available using the oc command:

$ oc <command>

Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure
  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.

  2. Select the appropriate version in the Version drop-down menu.

  3. Click Download Now next to the OpenShift v4.8 Windows Client entry and save the file.

  4. Unzip the archive with a ZIP program.

  5. Move the oc binary to a directory that is on your PATH.

    To check your PATH, open the command prompt and execute the following command:

    C:\> path

After you install the OpenShift CLI, it is available using the oc command:

C:\> oc <command>

Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

Procedure
  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.

  2. Select the appropriate version in the Version drop-down menu.

  3. Click Download Now next to the OpenShift v4.8 MacOSX Client entry and save the file.

  4. Unpack and unzip the archive.

  5. Move the oc binary to a directory on your PATH.

    To check your PATH, open a terminal and execute the following command:

    $ echo $PATH

After you install the OpenShift CLI, it is available using the oc command:

$ oc <command>

Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites
  • You deployed an OpenShift Container Platform cluster.

  • You installed the oc CLI.

Procedure
  1. Export the kubeadmin credentials:

    $ export KUBECONFIG=<installation_directory>/auth/kubeconfig (1)
    1 For <installation_directory>, specify the path to the directory that you stored the installation files in.
  2. Verify you can run oc commands successfully using the exported configuration:

    $ oc whoami
    Example output
    system:admin

Approving the certificate signing requests for your machines

When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests.

Prerequisites
  • You added machines to your cluster.

Procedure
  1. Confirm that the cluster recognizes the machines:

    $ oc get nodes
    Example output
    NAME      STATUS    ROLES   AGE  VERSION
    master-0  Ready     master  63m  v1.21.0
    master-1  Ready     master  63m  v1.21.0
    master-2  Ready     master  64m  v1.21.0

    The output lists all of the machines that you created.

    The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved.

  2. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster:

    $ oc get csr
    Example output
    NAME        AGE     REQUESTOR                                                                   CONDITION
    csr-8b2br   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
    csr-8vnps   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
    ...

    In this example, two machines are joining the cluster. You might see more approved CSRs in the list.

  3. If the CSRs were not approved automatically, then after all of the pending CSRs for the machines that you added are in the Pending status, approve the CSRs for your cluster machines:

    Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters.

    For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec, oc rsh, and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node.

    • To approve them individually, run the following command for each valid CSR:

      $ oc adm certificate approve <csr_name> (1)
      1 <csr_name> is the name of a CSR from the list of current CSRs.
    • To approve all pending CSRs, run the following command:

      $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve

      Some Operators might not become available until some CSRs are approved.
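
    If you need to keep approving client CSRs while the remaining machines join the cluster, a simple polling loop such as the following is one possible approach. It reuses the same oc commands shown above; in a production cluster, verify the identity of each request before you approve it, as described earlier in this procedure:

      $ while true; do
          oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
            | xargs --no-run-if-empty oc adm certificate approve
          sleep 60
        done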

  4. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster:

    $ oc get csr
    Example output
    NAME        AGE     REQUESTOR                                                                   CONDITION
    csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal                       Pending
    csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal                       Pending
    ...
  5. If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines:

    • To approve them individually, run the following command for each valid CSR:

      $ oc adm certificate approve <csr_name> (1)
      1 <csr_name> is the name of a CSR from the list of current CSRs.
    • To approve all pending CSRs, run the following command:

      $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
  6. After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command:

    $ oc get nodes
    Example output
    NAME      STATUS    ROLES   AGE  VERSION
    master-0  Ready     master  73m  v1.21.0
    master-1  Ready     master  73m  v1.21.0
    master-2  Ready     master  74m  v1.21.0
    worker-0  Ready     worker  11m  v1.21.0
    worker-1  Ready     worker  11m  v1.21.0

    It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status.


Initial Operator configuration

After the control plane initializes, you must immediately configure some Operators so that they all become available.

Prerequisites
  • Your control plane has initialized.

Procedure
  1. Watch the cluster components come online:

    $ watch -n5 oc get clusteroperators
    Example output
    NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
    authentication                             4.8.2     True        False         False      19m
    baremetal                                  4.8.2     True        False         False      37m
    cloud-credential                           4.8.2     True        False         False      40m
    cluster-autoscaler                         4.8.2     True        False         False      37m
    config-operator                            4.8.2     True        False         False      38m
    console                                    4.8.2     True        False         False      26m
    csi-snapshot-controller                    4.8.2     True        False         False      37m
    dns                                        4.8.2     True        False         False      37m
    etcd                                       4.8.2     True        False         False      36m
    image-registry                             4.8.2     True        False         False      31m
    ingress                                    4.8.2     True        False         False      30m
    insights                                   4.8.2     True        False         False      31m
    kube-apiserver                             4.8.2     True        False         False      26m
    kube-controller-manager                    4.8.2     True        False         False      36m
    kube-scheduler                             4.8.2     True        False         False      36m
    kube-storage-version-migrator              4.8.2     True        False         False      37m
    machine-api                                4.8.2     True        False         False      29m
    machine-approver                           4.8.2     True        False         False      37m
    machine-config                             4.8.2     True        False         False      36m
    marketplace                                4.8.2     True        False         False      37m
    monitoring                                 4.8.2     True        False         False      29m
    network                                    4.8.2     True        False         False      38m
    node-tuning                                4.8.2     True        False         False      37m
    openshift-apiserver                        4.8.2     True        False         False      32m
    openshift-controller-manager               4.8.2     True        False         False      30m
    openshift-samples                          4.8.2     True        False         False      32m
    operator-lifecycle-manager                 4.8.2     True        False         False      37m
    operator-lifecycle-manager-catalog         4.8.2     True        False         False      37m
    operator-lifecycle-manager-packageserver   4.8.2     True        False         False      32m
    service-ca                                 4.8.2     True        False         False      38m
    storage                                    4.8.2     True        False         False      37m
  2. Configure the Operators that are not available.
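
To see at a glance which Operators still require attention, one option is to print each Operator name together with its Available condition and filter out the Operators that are already available. This is a minimal sketch rather than part of the official procedure:

  $ oc get clusteroperators -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.conditions[?(@.type=="Available")].status}{"\n"}{end}' | grep -v ' True'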

Image registry storage configuration

Amazon Web Services provides default storage, which means the Image Registry Operator is available after installation. However, if the Registry Operator cannot create an S3 bucket and automatically configure storage, you must manually configure registry storage.

Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters.

Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades.

You can configure registry storage for user-provisioned infrastructure in AWS to deploy OpenShift Container Platform to hidden regions. See Configuring the registry for AWS user-provisioned infrastructure for more information.

Configuring registry storage for AWS with user-provisioned infrastructure

During installation, your cloud credentials are sufficient to create an Amazon S3 bucket and the Registry Operator will automatically configure storage.

If the Registry Operator cannot create an S3 bucket and automatically configure storage, you can create an S3 bucket and configure storage with the following procedure.

Prerequisites
  • You have a cluster on AWS with user-provisioned infrastructure.

  • For Amazon S3 storage, the secret is expected to contain two keys:

    • REGISTRY_STORAGE_S3_ACCESSKEY

    • REGISTRY_STORAGE_S3_SECRETKEY
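
If you need to supply these credentials yourself, one hedged example is to store the two keys in a secret in the openshift-image-registry namespace. The secret name shown follows the user-provided configuration convention that the Registry Operator reads; verify it against your version of the documentation, and replace the placeholder values:

  $ oc create secret generic image-registry-private-configuration-user \
      --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=<access_key> \
      --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=<secret_key> \
      --namespace openshift-image-registry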

Procedure

Use the following procedure if the Registry Operator cannot create an S3 bucket and automatically configure storage.

  1. Set up a Bucket Lifecycle Policy to abort incomplete multipart uploads that are one day old, as shown in the example after this procedure.

  2. Fill in the storage configuration in configs.imageregistry.operator.openshift.io/cluster:

    $ oc edit configs.imageregistry.operator.openshift.io/cluster
    Example configuration
    storage:
      s3:
        bucket: <bucket-name>
        region: <region-name>

To secure your registry images in AWS, block public access to the S3 bucket.
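
The following AWS CLI commands are one possible way to implement the lifecycle policy from step 1 and to block public access as recommended above; <bucket-name> is the bucket that the registry uses:

  # Abort incomplete multipart uploads after one day.
  $ aws s3api put-bucket-lifecycle-configuration --bucket <bucket-name> \
      --lifecycle-configuration '{"Rules":[{"ID":"abort-incomplete-multipart-uploads","Status":"Enabled","Filter":{"Prefix":""},"AbortIncompleteMultipartUpload":{"DaysAfterInitiation":1}}]}'
  # Block all forms of public access to the bucket.
  $ aws s3api put-public-access-block --bucket <bucket-name> \
      --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true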

Configuring storage for the image registry in non-production clusters

You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry.

Procedure
  • To set the image registry storage to an empty directory:

    $ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'

    Configure this option for only non-production clusters.

    If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error:

    Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found

    Wait a few minutes and run the command again.
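
If you prefer not to retry manually, a minimal sketch (assuming a Bourne-compatible shell) is to loop until the patch succeeds:

  $ until oc patch configs.imageregistry.operator.openshift.io cluster \
      --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'; do
      sleep 30
    done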

Deleting the bootstrap resources

After you complete the initial Operator configuration for the cluster, remove the bootstrap resources from Amazon Web Services (AWS).

Prerequisites
  • You completed the initial Operator configuration for your cluster.

Procedure
  1. Delete the bootstrap resources. If you used the CloudFormation template, delete its stack:

    • Delete the stack by using the AWS CLI:

      $ aws cloudformation delete-stack --stack-name <name> (1)
      1 <name> is the name of your bootstrap stack.
    • Delete the stack by using the AWS CloudFormation console.
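
If you delete the stack with the AWS CLI, you can optionally wait for the deletion to finish before continuing; <name> is the same bootstrap stack name as above:

  $ aws cloudformation wait stack-delete-complete --stack-name <name>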

Creating the Ingress DNS Records

If you removed the DNS Zone configuration, manually create DNS records that point to the Ingress load balancer. You can create either a wildcard record or specific records. While the following procedure uses A records, you can use other record types that you require, such as CNAME or alias.

Prerequisites
  • You installed the OpenShift CLI (oc).

  • You installed and configured the AWS CLI.

  • You installed the jq package.

Procedure
  1. Determine the routes to create.

    • To create a wildcard record, use *.apps.<cluster_name>.<domain_name>, where <cluster_name> is your cluster name, and <domain_name> is the Route 53 base domain for your OpenShift Container Platform cluster.

    • To create specific records, you must create a record for each route that your cluster uses, as shown in the output of the following command:

      $ oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes
      Example output
      oauth-openshift.apps.<cluster_name>.<domain_name>
      console-openshift-console.apps.<cluster_name>.<domain_name>
      downloads-openshift-console.apps.<cluster_name>.<domain_name>
      alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name>
      grafana-openshift-monitoring.apps.<cluster_name>.<domain_name>
      prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name>
  2. Retrieve the Ingress Operator load balancer status and note the value of the external IP address that it uses, which is shown in the EXTERNAL-IP column:

    $ oc -n openshift-ingress get service router-default
    Example output
    NAME             TYPE           CLUSTER-IP      EXTERNAL-IP                            PORT(S)                      AGE
    router-default   LoadBalancer   172.30.62.215   ab3...28.us-east-2.elb.amazonaws.com   80:31499/TCP,443:30693/TCP   5m
  3. Locate the hosted zone ID for the load balancer:

    $ aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == "<external_ip>").CanonicalHostedZoneNameID' (1)
    1 For <external_ip>, specify the value of the external IP address of the Ingress Operator load balancer that you obtained.
    Example output
    Z3AADJGX6KTTL2

    The output of this command is the load balancer hosted zone ID.

  4. Obtain the public hosted zone ID for your cluster’s domain:

    $ aws route53 list-hosted-zones-by-name \
                --dns-name "<domain_name>" \ (1)
                --query 'HostedZones[? Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' \ (1)
                --output text
    1 For <domain_name>, specify the Route 53 base domain for your OpenShift Container Platform cluster.
    Example output
    /hostedzone/Z3URY6TWQ91KVV

    The public hosted zone ID for your domain is shown in the command output. In this example, it is Z3URY6TWQ91KVV.

  5. Add the alias records to your private zone:

    $ aws route53 change-resource-record-sets --hosted-zone-id "<private_hosted_zone_id>" --change-batch '{ (1)
    >   "Changes": [
    >     {
    >       "Action": "CREATE",
    >       "ResourceRecordSet": {
    >         "Name": "\\052.apps.<cluster_domain>", (2)
    >         "Type": "A",
    >         "AliasTarget":{
    >           "HostedZoneId": "<hosted_zone_id>", (3)
    >           "DNSName": "<external_ip>.", (4)
    >           "EvaluateTargetHealth": false
    >         }
    >       }
    >     }
    >   ]
    > }'
    1 For <private_hosted_zone_id>, specify the value from the output of the CloudFormation template for DNS and load balancing.
    2 For <cluster_domain>, specify the domain or subdomain that you use with your OpenShift Container Platform cluster.
    3 For <hosted_zone_id>, specify the public hosted zone ID for the load balancer that you obtained.
    4 For <external_ip>, specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period (.) in this parameter value.
  6. Add the records to your public zone:

    $ aws route53 change-resource-record-sets --hosted-zone-id "<public_hosted_zone_id>" --change-batch '{ (1)
    >   "Changes": [
    >     {
    >       "Action": "CREATE",
    >       "ResourceRecordSet": {
    >         "Name": "\\052.apps.<cluster_domain>", (2)
    >         "Type": "A",
    >         "AliasTarget":{
    >           "HostedZoneId": "<hosted_zone_id>", (3)
    >           "DNSName": "<external_ip>.", (4)
    >           "EvaluateTargetHealth": false
    >         }
    >       }
    >     }
    >   ]
    > }'
    1 For <public_hosted_zone_id>, specify the public hosted zone ID for your domain.
    2 For <cluster_domain>, specify the domain or subdomain that you use with your OpenShift Container Platform cluster.
    3 For <hosted_zone_id>, specify the public hosted zone ID for the load balancer that you obtained.
    4 For <external_ip>, specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period (.) in this parameter value.
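
If you prefer to gather these values in one pass, the following is a minimal sketch that performs the lookups from steps 2 through 4 with illustrative variable names. It assumes that the AWS CLI and the jq package are configured and that the load balancer is a Classic Load Balancer, as in this procedure:

  # Step 2: capture the Ingress load balancer host name (the EXTERNAL-IP column).
  $ EXTERNAL_IP=$(oc -n openshift-ingress get service router-default \
      -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
  # Step 3: find the hosted zone ID that belongs to that load balancer.
  $ LB_ZONE_ID=$(aws elb describe-load-balancers \
      | jq -r --arg dns "$EXTERNAL_IP" \
        '.LoadBalancerDescriptions[] | select(.DNSName == $dns).CanonicalHostedZoneNameID')
  # Step 4: find the public hosted zone ID for the base domain and strip the /hostedzone/ prefix.
  $ PUBLIC_ZONE_ID=$(aws route53 list-hosted-zones-by-name --dns-name "<domain_name>" \
      --query 'HostedZones[? Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' \
      --output text)
  $ PUBLIC_ZONE_ID=${PUBLIC_ZONE_ID##*/}
  $ echo "EXTERNAL_IP=${EXTERNAL_IP} LB_ZONE_ID=${LB_ZONE_ID} PUBLIC_ZONE_ID=${PUBLIC_ZONE_ID}"

Substitute these values for <external_ip>, <hosted_zone_id>, and <public_hosted_zone_id> in the change-resource-record-sets commands in steps 5 and 6.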

Completing an AWS installation on user-provisioned infrastructure

After you start the OpenShift Container Platform installation on Amazon Web Services (AWS) user-provisioned infrastructure, monitor the deployment to completion.

Prerequisites
  • You removed the bootstrap node for an OpenShift Container Platform cluster on user-provisioned AWS infrastructure.

  • You installed the oc CLI.

Procedure
  • From the directory that contains the installation program, complete the cluster installation:

    $ ./openshift-install --dir <installation_directory> wait-for install-complete (1)
    1 For <installation_directory>, specify the path to the directory that you stored the installation files in.
    Example output
    INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize...
    INFO Waiting up to 10m0s for the openshift-console route to be created...
    INFO Install complete!
    INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
    INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
    INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Fe5en-ymBEc-Wt6NL"
    INFO Time elapsed: 1s
    • The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

    • It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
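
As an optional check after the installer reports completion, you can confirm that the cluster reports the expected version. This is a hedged example rather than part of the official procedure:

  $ export KUBECONFIG=<installation_directory>/auth/kubeconfig
  $ oc get clusterversion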

Logging in to the cluster by using the web console

The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.

Prerequisites
  • You have access to the installation host.

  • You completed a cluster installation and all cluster Operators are available.

Procedure
  1. Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:

    $ cat <installation_directory>/auth/kubeadmin-password

    Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.

  2. List the OpenShift Container Platform web console route:

    $ oc get routes -n openshift-console | grep 'console-openshift'

    Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.

    Example output
    console     console-openshift-console.apps.<cluster_name>.<base_domain>            console     https   reencrypt/Redirect   None
  3. Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.
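
Alternatively, if your oc release supports it, you can print the console URL directly:

  $ oc whoami --show-console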

Additional resources
  • See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.

Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.8, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.

After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

Additional resources

  • See Working with stacks in the AWS documentation for more information about AWS CloudFormation stacks.