This is a checklist of prerequisites needed to create a Red Hat OpenShift Service on AWS (ROSA) classic cluster with STS.
This is a high-level checklist, and your implementation can vary.
Before running the installation process, verify that you deploy this from a machine that has access to:
The API services for the cloud to which you provision.
The hosts on the network that you provision.
The internet to obtain installation media.
Accounts and CLIs you must install to deploy the cluster.
Gather the following details:
AWS IAM User
AWS Access Key ID
AWS Secret Access Key
Ensure that you have the right permissions, as detailed in AWS managed IAM policies for ROSA and About IAM resources for ROSA clusters that use STS.
See Account for more details.
Install the AWS CLI from AWS Command Line Interface if you have not already.
Configure the CLI by running aws configure in a terminal:
$ aws configure
Enter the AWS Access Key ID and press Enter.
Enter the AWS Secret Access Key and press Enter.
Enter the default region that you want to deploy into.
Enter the output format that you want, “table” or “json”.
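These prompts populate two files under ~/.aws. For reference, the resulting files look roughly like the following; the key values shown here are the documented AWS example placeholders, not real credentials:

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

# ~/.aws/config
[default]
region = us-east-1
output = json
```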
Verify the output by running:
$ aws sts get-caller-identity
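If the credentials are valid, the command returns the IAM identity that the CLI is using. The values below are illustrative placeholders:

```json
{
    "UserId": "AIDACKCEVSQ6C2EXAMPLE",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/rosa-user"
}
```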
Ensure that the service-linked role for Elastic Load Balancing (ELB) already exists by running:
$ aws iam get-role --role-name "AWSServiceRoleForElasticLoadBalancing"
If it does not exist, run:
$ aws iam create-service-linked-role --aws-service-name "elasticloadbalancing.amazonaws.com"
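The two commands above can be combined into one idempotent check. This is a sketch; ensure_elb_role is a hypothetical helper name, not part of any CLI:

```shell
# Create the ELB service-linked role only if it does not already exist.
# get-role exits non-zero when the role is missing, which drives the branch.
ensure_elb_role() {
  if aws iam get-role --role-name "AWSServiceRoleForElasticLoadBalancing" >/dev/null 2>&1; then
    echo "service-linked role already exists"
  else
    aws iam create-service-linked-role \
      --aws-service-name "elasticloadbalancing.amazonaws.com"
  fi
}
```

Running the helper twice is safe: the second run only reports that the role exists.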
Create a Red Hat Hybrid Cloud Console account if you have not already.
Enable ROSA from your AWS account on the AWS console if you have not already.
Install the ROSA CLI (rosa) from Installing the Red Hat OpenShift Service on AWS (ROSA) CLI, or download it from the OpenShift console or the AWS console.
Run rosa login in a terminal; this prompts you to go to the token page through the console:
$ rosa login
Log in with your Red Hat account credentials.
Click the Load token button.
Copy the token, paste it back into the CLI prompt, and press Enter.
Alternatively, you can copy the full rosa login --token=abc… command and paste it into the terminal:
$ rosa login --token=<abc..>
Verify your credentials by running:
$ rosa whoami
Ensure you have sufficient quota by running:
$ rosa verify quota
Once you have the above prerequisites installed and enabled, proceed to the next steps.
ROSA clusters are hosted in an AWS account within an AWS organizational unit. A service control policy (SCP) is created and applied to the AWS organizational unit that manages what services the AWS sub-accounts are permitted to access.
Ensure that your organization’s SCPs are not more restrictive than the roles and policies required by the cluster.
Ensure that your SCP is configured to allow the required aws-marketplace:Subscribe permission when you choose Enable ROSA from the console. See AWS Organizations service control policy (SCP) is denying required AWS Marketplace permissions for more details.
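As an illustration, an SCP statement that explicitly allows this permission could look like the following sketch; real SCPs typically combine it with the other actions your organization permits:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowMarketplaceSubscribe",
            "Effect": "Allow",
            "Action": "aws-marketplace:Subscribe",
            "Resource": "*"
        }
    ]
}
```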
When you create a ROSA classic cluster, an associated AWS OpenID Connect (OIDC) identity provider is created.
This OIDC provider configuration relies on a public key that is located in the us-east-1 AWS region. Customers with AWS SCPs must allow the use of the us-east-1 AWS region, even if these clusters are deployed in a different region.
The following prerequisites are needed from a networking standpoint.
Configure your firewall to allow access to the domains and ports listed in AWS firewall prerequisites.
When you create a cluster using an existing non-managed VPC, you can add additional custom security groups during cluster creation. Complete these prerequisites before you create the cluster:
Create the custom security groups in AWS before you create the cluster.
Associate the custom security groups with the VPC that you are using to create the cluster. Do not associate the custom security groups with any other VPC.
You may need to request additional AWS quota for Security groups per network interface.
For more details see the detailed requirements for Security groups.
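A sketch of the first two steps with the AWS CLI; the group name, description, and the create_cluster_sg helper name are all hypothetical placeholders:

```shell
# Create a custom security group inside the VPC that the cluster will use,
# and print its GroupId. Pass the cluster VPC ID as the first argument.
create_cluster_sg() {
  vpc_id="$1"
  aws ec2 create-security-group \
    --group-name "rosa-custom-sg" \
    --description "Custom security group for the ROSA cluster" \
    --vpc-id "$vpc_id" \
    --query 'GroupId' --output text
}
```

Creating the group with --vpc-id ties it to the cluster VPC, which satisfies the association requirement above.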
If you want to use custom DNS, then the ROSA installer must be able to use VPC DNS with default DHCP options so it can resolve hosts locally.
To do so, run aws ec2 describe-dhcp-options and see if the VPC is using the VPC Resolver:
$ aws ec2 describe-dhcp-options
Otherwise, the upstream DNS will need to forward the cluster scope to this VPC so the cluster can resolve internal IPs and services.
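The resolver check can be scripted as a rough sketch; vpc_resolver_check is a hypothetical helper that looks up the DHCP options set attached to the VPC and greps for the Amazon-provided DNS entry:

```shell
# Report whether a VPC's DHCP options point at the Amazon-provided resolver
# (AmazonProvidedDNS), i.e. the VPC Resolver, or at custom DNS servers.
vpc_resolver_check() {
  vpc_id="$1"
  opts_id=$(aws ec2 describe-vpcs --vpc-ids "$vpc_id" \
    --query 'Vpcs[0].DhcpOptionsId' --output text)
  if aws ec2 describe-dhcp-options --dhcp-options-ids "$opts_id" \
       --output text | grep -q "AmazonProvidedDNS"; then
    echo "VPC Resolver in use"
  else
    echo "custom DNS in use"
  fi
}
```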
If you choose to deploy a PrivateLink cluster, then be sure to deploy the cluster in the pre-existing BYO VPC:
Create a public and private subnet for each AZ that your cluster uses.
Alternatively, implement transit gateway for internet and egress with appropriate routes.
The VPC’s CIDR block must contain the Networking.MachineCIDR range, which is the IP address range for cluster machines.
The subnet CIDR blocks must belong to the machine CIDR that you specify.
That way, the cluster can use the Route 53 zones that are attached to the VPC to resolve cluster internal DNS records.
Verify route tables by running:
$ aws ec2 describe-route-tables --filters "Name=vpc-id,Values=<vpc-id>"
Ensure that the cluster can egress either through NAT gateway in public subnet or through transit gateway.
Ensure whatever UDR you would like to follow is set up.
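The egress check above can be sketched as a small script; check_nat_egress is a hypothetical helper that simply greps the route-table output for a NAT gateway target:

```shell
# Flag whether any route table in the VPC has a route targeting a NAT
# gateway (IDs start with "nat-"). A transit gateway setup would need a
# similar check for "tgw-" targets instead.
check_nat_egress() {
  vpc_id="$1"
  if aws ec2 describe-route-tables \
       --filters "Name=vpc-id,Values=$vpc_id" \
       --output text | grep -q "nat-"; then
    echo "NAT gateway route found"
  else
    echo "no NAT gateway route found"
  fi
}
```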
You can also configure a cluster-wide proxy during or after installation. See Configuring a cluster-wide proxy for more details.
You can install a non-PrivateLink ROSA cluster in a pre-existing BYO VPC.