You can quickly install an OpenShift Container Platform cluster on Amazon Web Services (AWS) Local Zones by setting the zone names in the edge compute pool of the install-config.yaml
file, or install a cluster in an existing Amazon Virtual Private Cloud (VPC) with Local Zone subnets.
AWS Local Zones is an infrastructure that places cloud resources close to metropolitan regions. For more information, see the AWS Local Zones Documentation.
You reviewed details about OpenShift Container Platform installation and update processes.
You are familiar with Selecting a cluster installation method and preparing it for users.
You configured an AWS account to host the cluster.
If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. |
You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or UNIX) in the AWS documentation.
If you use a firewall, you configured it to allow the sites that your cluster must access.
You noted the region and supported AWS Local Zones locations to create the network resources in.
You read the AWS Local Zones features in the AWS documentation.
You added permissions for creating network resources that support AWS Local Zones to the Identity and Access Management (IAM) user or role. The following example enables a zone group that can provide a user or role access for creating network resources that support AWS Local Zones.
Example of an additional IAM policy with the ec2:ModifyAvailabilityZoneGroup permission attached to an IAM user or role:
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ec2:ModifyAvailabilityZoneGroup"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
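If it helps, one way to create and attach such a policy is with the AWS CLI. This is a minimal sketch; the policy name, file name, and IAM user are placeholders, not values defined by this procedure:
$ cat > zone-group-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": ["ec2:ModifyAvailabilityZoneGroup"],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
EOF
$ aws iam create-policy \
    --policy-name local-zone-group-optin \
    --policy-document file://zone-group-policy.json
$ aws iam attach-user-policy \
    --user-name <iam_user> \
    --policy-arn <policy_arn_from_create_policy_output>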
Read the following sections to understand infrastructure behaviors and cluster limitations in an AWS Local Zones environment.
Some limitations exist when you deploy a cluster with a default installation configuration in an Amazon Web Services (AWS) Local Zone, and additional limitations apply when you deploy a cluster in a pre-configured AWS zone. If you want the installation program to automatically create Local Zone subnets for your OpenShift Container Platform cluster, specific configuration limitations also apply with that method.
Edge compute nodes are tainted compute nodes that run in AWS Local Zones locations.
When deploying a cluster that uses Local Zones, consider the following points:
Amazon EC2 instances in the Local Zones are more expensive than Amazon EC2 instances in the Availability Zones.
The latency is lower between the applications running in AWS Local Zones and the end user. A latency impact exists for some workloads if, for example, ingress traffic is mixed between Local Zones and Availability Zones.
Generally, the maximum transmission unit (MTU) between an Amazon EC2 instance in a Local Zone and an Amazon EC2 instance in the Region is 1300. The cluster network MTU must always be lower than the EC2 MTU to account for the overhead. The specific overhead is determined by the network plugin. For example, OVN-Kubernetes has an overhead of 100 bytes. The network plugin can provide additional features, such as IPsec, that also affect the MTU sizing. For more information, see How Local Zones work in the AWS documentation. |
OpenShift Container Platform 4.12 introduced a new compute pool, edge, that is designed for use in remote zones. The edge compute pool configuration is common between AWS Local Zones locations. Because of the type and size limitations of resources like EC2 and EBS in Local Zones, the default instance type can vary from the traditional compute pool.
The default Elastic Block Store (EBS) volume type for Local Zones locations is gp2, which differs from the non-edge compute pool. The instance type used for each Local Zone in an edge compute pool also might differ from other compute pools, depending on the instance offerings in the zone.
The edge compute pool creates new labels that developers can use to deploy applications onto AWS Local Zones nodes. The new labels are:
node-role.kubernetes.io/edge=''
machine.openshift.io/zone-type=local-zone
machine.openshift.io/zone-group=$ZONE_GROUP_NAME
By default, the machine sets for the edge compute pool define the taint of NoSchedule
to prevent other workloads from spreading on Local Zones instances. Users can only run user workloads if they define tolerations in the pod specification.
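For example, a workload that targets edge nodes can combine a node selector on the node-role.kubernetes.io/edge label with a toleration for the taint. The following is a minimal sketch that assumes the default taint key node-role.kubernetes.io/edge; the Deployment name, namespace, and image are placeholders:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-app
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: edge-app
  template:
    metadata:
      labels:
        app: edge-app
    spec:
      nodeSelector:
        node-role.kubernetes.io/edge: ""
      tolerations:
      - key: "node-role.kubernetes.io/edge"
        operator: "Exists"
        effect: "NoSchedule"
      containers:
      - name: app
        image: registry.example.com/app:latest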
Before you install a cluster in an AWS Local Zones environment, you must configure your infrastructure so that it can adopt Local Zone capabilities.
If you plan to create subnets in AWS Local Zones, you must opt in to each zone group separately.
You have installed the AWS CLI.
You have determined an AWS Region for where you want to deploy your OpenShift Container Platform cluster.
You have attached a permissive IAM policy to a user or role account that opts in to the zone group.
List the zones that are available in your AWS Region by running the following command:
$ aws --region "<value_of_AWS_Region>" ec2 describe-availability-zones \
--query 'AvailabilityZones[].[{ZoneName: ZoneName, GroupName: GroupName, Status: OptInStatus}]' \
--filters Name=zone-type,Values=local-zone \
--all-availability-zones
Depending on the AWS Region, the list of available zones might be long. The command returns the following fields:
ZoneName
The name of the Local Zone.
GroupName
The group that comprises the zone. To opt in to the zone group, save this name.
Status
The status of the Local Zones group. If the status is not-opted-in
, you must opt in to the GroupName
as described in the next step.
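For example, the output for a Region with one Local Zone that has not been opted in might look similar to the following. The zone names shown are illustrative:
[
    [
        {
            "ZoneName": "us-east-1-nyc-1a",
            "GroupName": "us-east-1-nyc-1",
            "Status": "not-opted-in"
        }
    ]
]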
Opt in to the zone group on your AWS account by running the following command:
$ aws ec2 modify-availability-zone-group \
--group-name "<value_of_GroupName>" \(1)
--opt-in-status opted-in
1 | Replace <value_of_GroupName> with the name of the group of the Local Zones where you want to create subnets.
For example, specify us-east-1-nyc-1 to use the zone us-east-1-nyc-1a (US East New York). |
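Optionally, you can verify that the opt-in succeeded by querying the zones again and checking that the status of the zone group changed to opted-in. This is a sketch that filters on the group name you supplied:
$ aws --region "<value_of_AWS_Region>" ec2 describe-availability-zones \
    --filters Name=zone-type,Values=local-zone \
    --query 'AvailabilityZones[?GroupName==`<value_of_GroupName>`].[ZoneName, OptInStatus]' \
    --all-availability-zones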
In OpenShift Container Platform 4.15, you require access to the internet to install your cluster.
You must have internet access to:
Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
Access Quay.io to obtain the packages that are required to install your cluster.
Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. |
If you are deploying an OpenShift Container Platform cluster using an AWS Marketplace image, you must first subscribe through AWS. Subscribing to the offer provides you with the AMI ID that the installation program uses to deploy compute nodes.
You have an AWS account to purchase the offer. This account does not have to be the same account that is used to install the cluster.
Complete the OpenShift Container Platform subscription from the AWS Marketplace.
Record the AMI ID for your specific AWS Region. As part of the installation process, you must update the install-config.yaml
file with this value before deploying the cluster.
Sample install-config.yaml file with AWS Marketplace compute nodes
apiVersion: v1
baseDomain: example.com
compute:
- hyperthreading: Enabled
name: worker
platform:
aws:
amiID: ami-06c4d345f7c207239 (1)
type: m5.4xlarge
replicas: 3
metadata:
name: test-cluster
platform:
aws:
region: us-east-2 (2)
sshKey: ssh-ed25519 AAAA...
pullSecret: '{"auths": ...}'
1 | The AMI ID from your AWS Marketplace subscription. |
2 | Your AMI ID is associated with a specific AWS Region. When creating the installation configuration file, ensure that you select the same AWS Region that you specified when configuring your subscription. |
You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc. |
You can install the OpenShift CLI (oc
) binary on Linux by using the following procedure.
Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
Select the architecture from the Product Variant drop-down list.
Select the appropriate version from the Version drop-down list.
Click Download Now next to the OpenShift v4.15 Linux Client entry and save the file.
Unpack the archive:
$ tar xvf <file>
Place the oc
binary in a directory that is on your PATH
.
To check your PATH
, execute the following command:
$ echo $PATH
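For example, assuming /usr/local/bin is already on your PATH, one common choice is:
$ chmod +x oc
$ sudo mv oc /usr/local/bin/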
After you install the OpenShift CLI, it is available using the oc
command:
$ oc <command>
You can install the OpenShift CLI (oc
) binary on Windows by using the following procedure.
Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
Select the appropriate version from the Version drop-down list.
Click Download Now next to the OpenShift v4.15 Windows Client entry and save the file.
Unzip the archive with a ZIP program.
Move the oc
binary to a directory that is on your PATH
.
To check your PATH
, open the command prompt and execute the following command:
C:\> path
After you install the OpenShift CLI, it is available using the oc
command:
C:\> oc <command>
You can install the OpenShift CLI (oc
) binary on macOS by using the following procedure.
Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
Select the appropriate version from the Version drop-down list.
Click Download Now next to the OpenShift v4.15 macOS Client entry and save the file.
For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. |
Unpack and unzip the archive.
Move the oc
binary to a directory on your PATH.
To check your PATH
, open a terminal and execute the following command:
$ echo $PATH
Verify your installation by using an oc
command:
$ oc <command>
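For example, to confirm the client version without contacting a cluster:
$ oc version --client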
Before you install OpenShift Container Platform, download the installation file on the host you are using for installation.
You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space.
Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
Select your infrastructure provider from the Run it yourself section of the page.
Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer.
Place the downloaded file in the directory where you want to store the installation configuration files.
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar -xvf openshift-install-linux.tar.gz
Download your installation pull secret from Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
Alternatively, you can retrieve the installation program from the Red Hat Customer Portal, where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. |
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys
list for the core
user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core
. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather
command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging is required. |
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs. |
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> (1)
1 | Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. |
If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. |
View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following to view the ~/.ssh/id_ed25519.pub
public key:
$ cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather
command.
On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. |
If the ssh-agent
process is not already running for your local user, start it as a background task:
$ eval "$(ssh-agent -s)"
Agent pid 31874
If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. |
Add your SSH private key to the ssh-agent
:
$ ssh-add <path>/<file_name> (1)
1 | Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 |
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
When you install OpenShift Container Platform, provide the SSH public key to the installation program.
Before you extend nodes to Local Zones, you must prepare certain resources for the cluster installation environment.
Each cluster machine must meet the following minimum requirements:
Machine | Operating System | vCPU [1] | Virtual RAM | Storage | Input/Output Per Second (IOPS)[2] |
---|---|---|---|---|---|
Bootstrap | RHCOS | 4 | 16 GB | 100 GB | 300 |
Control plane | RHCOS | 4 | 16 GB | 100 GB | 300 |
Compute | RHCOS, RHEL 8.6 and later [3] | 2 | 8 GB | 100 GB | 300 |
One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core × cores) × sockets = vCPUs.
OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance.
As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later.
As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires:
For more information, see RHEL Architectures. |
If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use with OpenShift Container Platform.
The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform for use with AWS Local Zones.
Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in the section named "Minimum resource requirements for cluster installation". |
c5.*
c5d.*
m6i.*
m5.*
r5.*
t3.*
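To check which of these instance types are offered in a specific Local Zone, you can query the zone's instance type offerings with the AWS CLI. The zone and Region names below are examples only:
$ aws ec2 describe-instance-type-offerings \
    --region us-west-2 \
    --location-type availability-zone \
    --filters Name=location,Values=us-west-2-lax-1a \
    --query 'InstanceTypeOfferings[].InstanceType' \
    --output text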
See AWS Local Zones features in the AWS documentation.
Generate and customize the installation configuration file that the installation program needs to deploy your cluster.
You obtained the OpenShift Container Platform installation program for user-provisioned infrastructure and the pull secret for your cluster.
You checked that you are deploying your cluster to an AWS Region with an accompanying Red Hat Enterprise Linux CoreOS (RHCOS) AMI published by Red Hat. If you are deploying to an AWS Region that requires a custom AMI, such as an AWS GovCloud Region, you must create the install-config.yaml
file manually.
Create the install-config.yaml
file.
Change to the directory that contains the installation program and run the following command:
$ ./openshift-install create install-config --dir <installation_directory> (1)
1 | For <installation_directory> , specify the directory name to store the
files that the installation program creates. |
Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. |
At the prompts, provide the configuration details for your cloud:
Optional: Select an SSH key to use to access your cluster machines.
For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. |
Select aws as the platform to target.
If you do not have an AWS profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.
The AWS access key ID and secret access key are stored in ~/.aws/credentials in the home directory of the current user on the installation host. Any credentials that you provide to the installation program are stored in the file. |
Select the AWS Region to deploy the cluster to.
Select the base domain for the Route 53 service that you configured for your cluster.
Enter a descriptive name for your cluster.
Paste the pull secret from Red Hat OpenShift Cluster Manager.
Optional: Back up the install-config.yaml
file.
The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. |
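For example, a simple way to keep a copy before the file is consumed:
$ cp <installation_directory>/install-config.yaml <installation_directory>/install-config.yaml.backup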
The following examples show install-config.yaml
files that contain an edge machine pool configuration.
apiVersion: v1
baseDomain: devcluster.openshift.com
metadata:
name: ipi-edgezone
compute:
- name: edge
platform:
aws:
type: r5.2xlarge
platform:
aws:
region: us-west-2
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...
Instance types differ between locations. To verify availability in the Local Zones in which the cluster runs, see the AWS documentation.
apiVersion: v1
baseDomain: devcluster.openshift.com
metadata:
name: ipi-edgezone
compute:
- name: edge
platform:
aws:
zones:
- us-west-2-lax-1a
- us-west-2-lax-1b
- us-west-2-phx-2a
rootVolume:
type: gp3
size: 120
platform:
aws:
region: us-west-2
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...
Elastic Block Storage (EBS) types differ between locations. Check the AWS documentation to verify availability in the Local Zones in which the cluster runs.
apiVersion: v1
baseDomain: devcluster.openshift.com
metadata:
name: ipi-edgezone
compute:
- name: edge
platform:
aws:
additionalSecurityGroupIDs:
- sg-1 (1)
- sg-2
platform:
aws:
region: us-west-2
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...
1 | Specify the name of the security group as it is displayed on the Amazon EC2 console. Ensure that you include the sg prefix. |
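If you need to look up the IDs of the security groups that exist in your VPC, one option is to query them with the AWS CLI. The VPC ID is a placeholder:
$ aws ec2 describe-security-groups \
    --filters Name=vpc-id,Values=<vpc_id> \
    --query 'SecurityGroups[].[GroupId,GroupName]' \
    --output table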
Before you deploy a cluster on AWS, you can customize the cluster network maximum transmission unit (MTU) for your cluster network to meet the needs of your infrastructure.
By default, when you install a cluster with supported Local Zones capabilities, the MTU value for the cluster network is automatically adjusted to the lowest value that the network plugin accepts.
Setting an unsupported MTU value for EC2 instances that operate in the Local Zones infrastructure can cause issues for your OpenShift Container Platform cluster. |
If the Local Zone supports higher MTU values between the EC2 instances in the Local Zone and the EC2 instances in the AWS Region, you can manually configure the higher value to increase the network performance of the cluster network.
You can customize the MTU for a cluster by specifying the networking.clusterNetworkMTU
parameter in the install-config.yaml
configuration file.
All subnets in Local Zones must support the higher MTU value, so that each node in that zone can successfully communicate with services in the AWS Region and deploy your workloads. |
apiVersion: v1
baseDomain: devcluster.openshift.com
metadata:
name: edge-zone
networking:
clusterNetworkMTU: 8901
compute:
- name: edge
platform:
aws:
zones:
- us-west-2-lax-1a
- us-west-2-lax-1b
platform:
aws:
region: us-west-2
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...
For more information about the maximum supported maximum transmission unit (MTU) value, see AWS resources supported in Local Zones in the AWS documentation.
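After the cluster is installed, you can optionally confirm the MTU that the cluster network is using. This is a sketch that assumes the cluster Network configuration reports the value in its status:
$ oc get network.config.openshift.io cluster -o jsonpath='{.status.clusterNetworkMTU}{"\n"}'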
Choose one of the following installation options to install an OpenShift Container Platform cluster on AWS with edge compute nodes defined in Local Zones:
Fully automated option: Installing a cluster to quickly extend compute nodes to edge compute pools, where the installation program automatically creates infrastructure resources for the OpenShift Container Platform cluster.
Existing VPC option: Installing a cluster on AWS into an existing VPC, where you supply Local Zones subnets to the install-config.yaml
file.
Choose one of the following options to install an OpenShift Container Platform cluster in an AWS Local Zones environment:
For OpenShift Container Platform 4.15, you can quickly install a cluster on Amazon Web Services (AWS) to extend compute nodes to Local Zones locations. By using this installation route, the installation program automatically creates network resources and Local Zones subnets for each zone that you defined in your configuration file. To customize the installation, you must modify parameters in the install-config.yaml
file before you deploy the cluster.
Modify an install-config.yaml
file to include AWS Local Zones.
You have configured an AWS account.
You added your AWS keys and AWS Region to your local AWS profile by running aws configure
.
You are familiar with the configuration limitations that apply when you specify the installation program to automatically create subnets for your OpenShift Container Platform cluster.
You opted in to the Local Zones group for each zone.
You created an install-config.yaml
file by using the procedure "Creating the installation configuration file".
Modify the install-config.yaml
file by specifying Local Zones names in the platform.aws.zones
property of the edge compute pool.
# ...
platform:
aws:
region: <region_name> (1)
compute:
- name: edge
platform:
aws:
zones: (2)
- <local_zone_name>
#...
1 | The AWS Region name. |
2 | The list of Local Zones names that you use must exist in the same AWS Region specified in the platform.aws.region field. |
Example of a configuration file that uses the us-west-2 AWS Region and extends edge nodes to Local Zones in the Los Angeles and Las Vegas locations
apiVersion: v1
baseDomain: example.com
metadata:
name: cluster-name
platform:
aws:
region: us-west-2
compute:
- name: edge
platform:
aws:
zones:
- us-west-2-lax-1a
- us-west-2-lax-1b
- us-west-2-las-1a
pullSecret: '{"auths": ...}'
sshKey: 'ssh-ed25519 AAAA...'
#...
Deploy your cluster.
You can install a cluster into an existing Amazon Virtual Private Cloud (VPC) on Amazon Web Services (AWS). The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, modify parameters in the install-config.yaml
file before you install the cluster.
Installing a cluster on AWS into an existing VPC requires extending compute nodes to the edge of the Cloud Infrastructure by using AWS Local Zones.
Local Zone subnets extend regular compute nodes to edge networks. Each edge compute node runs a user workload. After you create an Amazon Web Services (AWS) Local Zone environment and deploy your cluster, you can use edge compute nodes to create user workloads in Local Zone subnets.
If you want to create private subnets, you must either modify the provided CloudFormation template or create your own template. |
You can use a provided CloudFormation template to create network resources. Additionally, you can modify a template to customize your infrastructure or use the information that they contain to create AWS resources according to your company’s policies.
The steps for performing an installer-provisioned infrastructure installation are provided for example purposes only. Installing a cluster in an existing VPC requires that you have knowledge of the cloud provider and the installation process of OpenShift Container Platform. You can use a CloudFormation template to assist you with completing these steps or to help model your own cluster installation. Instead of using the CloudFormation template to create resources, you can decide to use other methods for generating these resources. |
You can create a Virtual Private Cloud (VPC), and subnets for all Local Zones locations, in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to extend compute nodes to edge locations. You can further customize your VPC to meet your requirements, including a VPN and route tables. You can also add new Local Zones subnets not included at initial deployment.
You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the VPC.
If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. |
You configured an AWS account.
You added your AWS keys and AWS Region to your local AWS profile by running aws configure
.
You opted in to the AWS Local Zones on your AWS account.
Create a JSON file that contains the parameter values that the CloudFormation template requires:
[
{
"ParameterKey": "VpcCidr", (1)
"ParameterValue": "10.0.0.0/16" (2)
},
{
"ParameterKey": "AvailabilityZoneCount", (3)
"ParameterValue": "3" (4)
},
{
"ParameterKey": "SubnetBits", (5)
"ParameterValue": "12" (6)
}
]
1 | The CIDR block for the VPC. |
2 | Specify a CIDR block in the format x.x.x.x/16-24 . |
3 | The number of availability zones to deploy the VPC in. |
4 | Specify an integer between 1 and 3 . |
5 | The size of each subnet in each availability zone. |
6 | Specify an integer between 5 and 13 , where 5 is /27 and 13 is /19 . |
Go to the section of the documentation named "CloudFormation template for the VPC", and then copy the syntax from the provided template. Save the copied template syntax as a YAML file on your local system. This template describes the VPC that your cluster requires.
Launch the CloudFormation template to create a stack of AWS resources that represent the VPC by running the following command:
You must enter the command on a single line. |
$ aws cloudformation create-stack --stack-name <name> \(1)
--template-body file://<template>.yaml \(2)
--parameters file://<parameters>.json (3)
1 | <name> is the name for the CloudFormation stack, such as cluster-vpc .
You need the name of this stack if you remove the cluster. |
2 | <template> is the relative path to and name of the CloudFormation template
YAML file that you saved. |
3 | <parameters> is the relative path and the name of the CloudFormation
parameters JSON file. |
arn:aws:cloudformation:us-east-1:123456789012:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f
Confirm that the template components exist by running the following command:
$ aws cloudformation describe-stacks --stack-name <name>
After the StackStatus
displays CREATE_COMPLETE
, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster.
VpcId | The ID of your VPC. |
PublicSubnetIds | The IDs of the new public subnets. |
PrivateSubnetIds | The IDs of the new private subnets. |
PublicRouteTableId | The ID of the new public route table. |
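To capture only these output values, you can optionally filter the same command with a JMESPath query, for example:
$ aws cloudformation describe-stacks --stack-name <name> \
    --query 'Stacks[0].Outputs' --output table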
You can use the following CloudFormation template to deploy the VPC that you need for your OpenShift Container Platform cluster.
AWSTemplateFormatVersion: 2010-09-09
Description: Template for Best Practice VPC with 1-3 AZs
Parameters:
VpcCidr:
AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))$
ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24.
Default: 10.0.0.0/16
Description: CIDR block for VPC.
Type: String
AvailabilityZoneCount:
ConstraintDescription: "The number of availability zones. (Min: 1, Max: 3)"
MinValue: 1
MaxValue: 3
Default: 1
Description: "How many AZs to create VPC subnets for. (Min: 1, Max: 3)"
Type: Number
SubnetBits:
ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27.
MinValue: 5
MaxValue: 13
Default: 12
Description: "Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)"
Type: Number
Metadata:
AWS::CloudFormation::Interface:
ParameterGroups:
- Label:
default: "Network Configuration"
Parameters:
- VpcCidr
- SubnetBits
- Label:
default: "Availability Zones"
Parameters:
- AvailabilityZoneCount
ParameterLabels:
AvailabilityZoneCount:
default: "Availability Zone Count"
VpcCidr:
default: "VPC CIDR"
SubnetBits:
default: "Bits Per Subnet"
Conditions:
DoAz3: !Equals [3, !Ref AvailabilityZoneCount]
DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3]
Resources:
VPC:
Type: "AWS::EC2::VPC"
Properties:
EnableDnsSupport: "true"
EnableDnsHostnames: "true"
CidrBlock: !Ref VpcCidr
PublicSubnet:
Type: "AWS::EC2::Subnet"
Properties:
VpcId: !Ref VPC
CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
AvailabilityZone: !Select
- 0
- Fn::GetAZs: !Ref "AWS::Region"
PublicSubnet2:
Type: "AWS::EC2::Subnet"
Condition: DoAz2
Properties:
VpcId: !Ref VPC
CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
AvailabilityZone: !Select
- 1
- Fn::GetAZs: !Ref "AWS::Region"
PublicSubnet3:
Type: "AWS::EC2::Subnet"
Condition: DoAz3
Properties:
VpcId: !Ref VPC
CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
AvailabilityZone: !Select
- 2
- Fn::GetAZs: !Ref "AWS::Region"
InternetGateway:
Type: "AWS::EC2::InternetGateway"
GatewayToInternet:
Type: "AWS::EC2::VPCGatewayAttachment"
Properties:
VpcId: !Ref VPC
InternetGatewayId: !Ref InternetGateway
PublicRouteTable:
Type: "AWS::EC2::RouteTable"
Properties:
VpcId: !Ref VPC
PublicRoute:
Type: "AWS::EC2::Route"
DependsOn: GatewayToInternet
Properties:
RouteTableId: !Ref PublicRouteTable
DestinationCidrBlock: 0.0.0.0/0
GatewayId: !Ref InternetGateway
PublicSubnetRouteTableAssociation:
Type: "AWS::EC2::SubnetRouteTableAssociation"
Properties:
SubnetId: !Ref PublicSubnet
RouteTableId: !Ref PublicRouteTable
PublicSubnetRouteTableAssociation2:
Type: "AWS::EC2::SubnetRouteTableAssociation"
Condition: DoAz2
Properties:
SubnetId: !Ref PublicSubnet2
RouteTableId: !Ref PublicRouteTable
PublicSubnetRouteTableAssociation3:
Condition: DoAz3
Type: "AWS::EC2::SubnetRouteTableAssociation"
Properties:
SubnetId: !Ref PublicSubnet3
RouteTableId: !Ref PublicRouteTable
PrivateSubnet:
Type: "AWS::EC2::Subnet"
Properties:
VpcId: !Ref VPC
CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
AvailabilityZone: !Select
- 0
- Fn::GetAZs: !Ref "AWS::Region"
PrivateRouteTable:
Type: "AWS::EC2::RouteTable"
Properties:
VpcId: !Ref VPC
PrivateSubnetRouteTableAssociation:
Type: "AWS::EC2::SubnetRouteTableAssociation"
Properties:
SubnetId: !Ref PrivateSubnet
RouteTableId: !Ref PrivateRouteTable
NAT:
DependsOn:
- GatewayToInternet
Type: "AWS::EC2::NatGateway"
Properties:
AllocationId:
"Fn::GetAtt":
- EIP
- AllocationId
SubnetId: !Ref PublicSubnet
EIP:
Type: "AWS::EC2::EIP"
Properties:
Domain: vpc
Route:
Type: "AWS::EC2::Route"
Properties:
RouteTableId:
Ref: PrivateRouteTable
DestinationCidrBlock: 0.0.0.0/0
NatGatewayId:
Ref: NAT
PrivateSubnet2:
Type: "AWS::EC2::Subnet"
Condition: DoAz2
Properties:
VpcId: !Ref VPC
CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
AvailabilityZone: !Select
- 1
- Fn::GetAZs: !Ref "AWS::Region"
PrivateRouteTable2:
Type: "AWS::EC2::RouteTable"
Condition: DoAz2
Properties:
VpcId: !Ref VPC
PrivateSubnetRouteTableAssociation2:
Type: "AWS::EC2::SubnetRouteTableAssociation"
Condition: DoAz2
Properties:
SubnetId: !Ref PrivateSubnet2
RouteTableId: !Ref PrivateRouteTable2
NAT2:
DependsOn:
- GatewayToInternet
Type: "AWS::EC2::NatGateway"
Condition: DoAz2
Properties:
AllocationId:
"Fn::GetAtt":
- EIP2
- AllocationId
SubnetId: !Ref PublicSubnet2
EIP2:
Type: "AWS::EC2::EIP"
Condition: DoAz2
Properties:
Domain: vpc
Route2:
Type: "AWS::EC2::Route"
Condition: DoAz2
Properties:
RouteTableId:
Ref: PrivateRouteTable2
DestinationCidrBlock: 0.0.0.0/0
NatGatewayId:
Ref: NAT2
PrivateSubnet3:
Type: "AWS::EC2::Subnet"
Condition: DoAz3
Properties:
VpcId: !Ref VPC
CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
AvailabilityZone: !Select
- 2
- Fn::GetAZs: !Ref "AWS::Region"
PrivateRouteTable3:
Type: "AWS::EC2::RouteTable"
Condition: DoAz3
Properties:
VpcId: !Ref VPC
PrivateSubnetRouteTableAssociation3:
Type: "AWS::EC2::SubnetRouteTableAssociation"
Condition: DoAz3
Properties:
SubnetId: !Ref PrivateSubnet3
RouteTableId: !Ref PrivateRouteTable3
NAT3:
DependsOn:
- GatewayToInternet
Type: "AWS::EC2::NatGateway"
Condition: DoAz3
Properties:
AllocationId:
"Fn::GetAtt":
- EIP3
- AllocationId
SubnetId: !Ref PublicSubnet3
EIP3:
Type: "AWS::EC2::EIP"
Condition: DoAz3
Properties:
Domain: vpc
Route3:
Type: "AWS::EC2::Route"
Condition: DoAz3
Properties:
RouteTableId:
Ref: PrivateRouteTable3
DestinationCidrBlock: 0.0.0.0/0
NatGatewayId:
Ref: NAT3
S3Endpoint:
Type: AWS::EC2::VPCEndpoint
Properties:
PolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Principal: '*'
Action:
- '*'
Resource:
- '*'
RouteTableIds:
- !Ref PublicRouteTable
- !Ref PrivateRouteTable
- !If [DoAz2, !Ref PrivateRouteTable2, !Ref "AWS::NoValue"]
- !If [DoAz3, !Ref PrivateRouteTable3, !Ref "AWS::NoValue"]
ServiceName: !Join
- ''
- - com.amazonaws.
- !Ref 'AWS::Region'
- .s3
VpcId: !Ref VPC
Outputs:
VpcId:
Description: ID of the new VPC.
Value: !Ref VPC
PublicSubnetIds:
Description: Subnet IDs of the public subnets.
Value:
!Join [
",",
[!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PublicSubnet3, !Ref "AWS::NoValue"]]
]
PrivateSubnetIds:
Description: Subnet IDs of the private subnets.
Value:
!Join [
",",
[!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PrivateSubnet3, !Ref "AWS::NoValue"]]
]
PublicRouteTableId:
Description: Public Route table ID
Value: !Ref PublicRouteTable
PrivateRouteTableIds:
Description: Private Route table IDs
Value:
!Join [
",",
[
!Join ["=", [
!Select [0, "Fn::GetAZs": !Ref "AWS::Region"],
!Ref PrivateRouteTable
]],
!If [DoAz2,
!Join ["=", [!Select [1, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable2]],
!Ref "AWS::NoValue"
],
!If [DoAz3,
!Join ["=", [!Select [2, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable3]],
!Ref "AWS::NoValue"
]
]
]
Before you configure a machine set for edge compute nodes in your OpenShift Container Platform cluster, you must create the subnets in Local Zones. Complete the following procedure for each Local Zone that you want to deploy compute nodes to.
You can use the provided CloudFormation template and create a CloudFormation stack. You can then use this stack to custom provision a subnet.
If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. |
You configured an AWS account.
You added your AWS keys and region to your local AWS profile by running aws configure
.
You opted in to the Local Zones group.
Go to the section of the documentation named "CloudFormation template for the VPC subnet", and copy the syntax from the template. Save the copied template syntax as a YAML file on your local system. This template describes the VPC subnets that your cluster requires.
Run the following command to deploy the CloudFormation template, which creates a stack of AWS resources that represent the subnets:
$ aws cloudformation create-stack --stack-name <stack_name> \(1)
--region ${CLUSTER_REGION} \
--template-body file://<template>.yaml \(2)
--parameters \
ParameterKey=VpcId,ParameterValue="${VPC_ID}" \(3)
ParameterKey=ClusterName,ParameterValue="${CLUSTER_NAME}" \(4)
ParameterKey=ZoneName,ParameterValue="${ZONE_NAME}" \(5)
ParameterKey=PublicRouteTableId,ParameterValue="${ROUTE_TABLE_PUB}" \(6)
ParameterKey=PublicSubnetCidr,ParameterValue="${SUBNET_CIDR_PUB}" \(7)
ParameterKey=PrivateRouteTableId,ParameterValue="${ROUTE_TABLE_PVT}" \(8)
ParameterKey=PrivateSubnetCidr,ParameterValue="${SUBNET_CIDR_PVT}" (9)
1 | <stack_name> is the name for the CloudFormation stack, such as cluster-wl-<local_zone_shortname>. You need the name of this stack if you remove the cluster. |
2 | <template> is the relative path and the name of the CloudFormation template YAML file that you saved. |
3 | ${VPC_ID} is the VPC ID, which is the value VpcId in the output of the CloudFormation template for the VPC. |
4 | ${CLUSTER_NAME} is the value of ClusterName to be used as a prefix of the new AWS resource names. |
5 | ${ZONE_NAME} is the value of the Local Zones name to create the subnets in. |
6 | ${ROUTE_TABLE_PUB} is the PublicRouteTableId extracted from the output of the VPC's CloudFormation stack. |
7 | ${SUBNET_CIDR_PUB} is a valid CIDR block that is used to create the public subnet. This block must be part of the VPC CIDR block VpcCidr. |
8 | ${ROUTE_TABLE_PVT} is the PrivateRouteTableId extracted from the output of the VPC's CloudFormation stack. |
9 | ${SUBNET_CIDR_PVT} is a valid CIDR block that is used to create the private subnet. This block must be part of the VPC CIDR block VpcCidr. |
arn:aws:cloudformation:us-east-1:123456789012:stack/<stack_name>/dbedae40-820e-11eb-2fd3-12a48460849f
Confirm that the template components exist by running the following command:
$ aws cloudformation describe-stacks --stack-name <stack_name>
After the StackStatus
displays CREATE_COMPLETE
, the output displays values for the following parameters. Ensure that you provide these parameter values to the other CloudFormation templates that you run to create resources for your cluster.
PublicSubnetId | The ID of the public subnet created by the CloudFormation stack. |
PrivateSubnetId | The ID of the private subnet created by the CloudFormation stack. |
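To extract a single value, such as the public subnet ID, you can optionally query the stack outputs directly, for example:
$ aws cloudformation describe-stacks --stack-name <stack_name> \
    --query 'Stacks[0].Outputs[?OutputKey==`PublicSubnetId`].OutputValue' \
    --output text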
You can use the following CloudFormation template to deploy the private and public subnets in a zone on Local Zones infrastructure.
AWSTemplateFormatVersion: 2010-09-09
Description: Template for Best Practice Subnets (Public and Private)
Parameters:
VpcId:
Description: VPC ID that comprises all the target subnets.
Type: String
AllowedPattern: ^(?:(?:vpc)(?:-[a-zA-Z0-9]+)?\b|(?:[0-9]{1,3}\.){3}[0-9]{1,3})$
ConstraintDescription: VPC ID must be with valid name, starting with vpc-.*.
ClusterName:
Description: Cluster name or prefix name to prepend the Name tag for each subnet.
Type: String
AllowedPattern: ".+"
ConstraintDescription: ClusterName parameter must be specified.
ZoneName:
Description: Zone Name to create the subnets, such as us-west-2-lax-1a.
Type: String
AllowedPattern: ".+"
ConstraintDescription: ZoneName parameter must be specified.
PublicRouteTableId:
Description: Public Route Table ID to associate the public subnet.
Type: String
AllowedPattern: ".+"
ConstraintDescription: PublicRouteTableId parameter must be specified.
PublicSubnetCidr:
AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))$
ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24.
Default: 10.0.128.0/20
Description: CIDR block for public subnet.
Type: String
PrivateRouteTableId:
Description: Private Route Table ID to associate the private subnet.
Type: String
AllowedPattern: ".+"
ConstraintDescription: PrivateRouteTableId parameter must be specified.
PrivateSubnetCidr:
AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))$
ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24.
Default: 10.0.128.0/20
Description: CIDR block for private subnet.
Type: String
Resources:
PublicSubnet:
Type: "AWS::EC2::Subnet"
Properties:
VpcId: !Ref VpcId
CidrBlock: !Ref PublicSubnetCidr
AvailabilityZone: !Ref ZoneName
Tags:
- Key: Name
Value: !Join ['-', [!Ref ClusterName, "public", !Ref ZoneName]]
PublicSubnetRouteTableAssociation:
Type: "AWS::EC2::SubnetRouteTableAssociation"
Properties:
SubnetId: !Ref PublicSubnet
RouteTableId: !Ref PublicRouteTableId
PrivateSubnet:
Type: "AWS::EC2::Subnet"
Properties:
VpcId: !Ref VpcId
CidrBlock: !Ref PrivateSubnetCidr
AvailabilityZone: !Ref ZoneName
Tags:
- Key: Name
Value: !Join ['-', [!Ref ClusterName, "private", !Ref ZoneName]]
PrivateSubnetRouteTableAssociation:
Type: "AWS::EC2::SubnetRouteTableAssociation"
Properties:
SubnetId: !Ref PrivateSubnet
RouteTableId: !Ref PrivateRouteTableId
Outputs:
PublicSubnetId:
Description: Subnet ID of the public subnets.
Value:
!Join ["", [!Ref PublicSubnet]]
PrivateSubnetId:
Description: Subnet ID of the private subnets.
Value:
!Join ["", [!Ref PrivateSubnet]]
You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console.
Modify your install-config.yaml
file to include Local Zones subnets.
You created subnets by using the procedure "Creating subnets in Local Zones".
You created an install-config.yaml
file by using the procedure "Creating the installation configuration file".
Modify the install-config.yaml
configuration file by specifying Local Zones subnets in the platform.aws.subnets
parameter.
# ...
platform:
aws:
region: us-west-2
subnets: (1)
- publicSubnetId-1
- publicSubnetId-2
- publicSubnetId-3
- privateSubnetId-1
- privateSubnetId-2
- privateSubnetId-3
- publicSubnetId-LocalZone-1
# ...
1 | List of subnet IDs created in the zones: Availability and Local Zones. |
For more information about viewing the CloudFormation stacks that you created, see AWS CloudFormation console.
For more information about AWS profile and credential configuration, see Configuration and credential file settings in the AWS documentation.
By default, the installation program creates and attaches security groups to control plane and compute machines. The rules associated with the default security groups cannot be modified.
However, you can apply additional existing AWS security groups, which are associated with your existing VPC, to control plane and compute machines. Applying custom security groups can help you meet the security needs of your organization, such as when you need to control the incoming or outgoing traffic of these machines.
As part of the installation process, you apply custom security groups by modifying the install-config.yaml
file before deploying the cluster.
For more information, see "Edge compute pools and AWS Local Zones".
If your workload requires deploying the edge compute nodes in public subnets on Local Zones infrastructure, you can configure the machine set manifests when installing a cluster.
AWS Local Zones infrastructure accesses the network traffic in a specified zone, so applications can take advantage of lower latency when serving end users that are closer to that zone.
The default setting that deploys compute nodes in private subnets might not meet your needs, so consider creating edge compute nodes in public subnets when you want to apply more customization to your infrastructure.
By default, OpenShift Container Platform deploys compute nodes in private subnets. For best performance, consider placing compute nodes in subnets that have public IP addresses attached. You must create additional security groups, but ensure that you open the group rules to the internet only when you really need to. |
Change to the directory that contains the installation program and generate the manifest files. Ensure that the installation manifests get created at the openshift
and manifests
directory level.
$ ./openshift-install create manifests --dir <installation_directory>
Edit the machine set manifest that the installation program generates for the Local Zones, so that the manifest gets deployed in public subnets. Specify true for the spec.template.spec.providerSpec.value.publicIp parameter.
spec:
template:
spec:
providerSpec:
value:
publicIp: true
subnet:
filters:
- name: tag:Name
values:
- ${INFRA_ID}-public-${ZONE_NAME}
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
name: <infrastructure_id>-edge-<zone>
namespace: openshift-machine-api
spec:
template:
spec:
providerSpec:
value:
publicIp: true
You can install OpenShift Container Platform on a compatible cloud platform.
You can run the create cluster command of the installation program only once, during initial installation. |
You have configured an account with the cloud platform that hosts your cluster.
You have the OpenShift Container Platform installation program and the pull secret for your cluster.
You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions.
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \ (1)
--log-level=info (2)
1 | For <installation_directory> , specify the
location of your customized ./install-config.yaml file. |
2 | To view different installation details, specify warn , debug , or
error instead of info . |
Optional: Remove or disable the AdministratorAccess
policy from the IAM
account that you used to install the cluster.
The elevated permissions provided by the AdministratorAccess policy are required only during installation. |
When the cluster deployment completes successfully:
The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin
user.
Credential information also outputs to <installation_directory>/.openshift_install.log
.
Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. |
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "password"
INFO Time elapsed: 36m22s
Verify that your OpenShift Container Platform cluster deployed successfully on AWS Local Zones.
You can log in to your cluster as a default system user by exporting the cluster kubeconfig
file.
The kubeconfig
file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server.
The file is specific to a cluster and is created during OpenShift Container Platform installation.
You deployed an OpenShift Container Platform cluster.
You installed the oc
CLI.
Export the kubeadmin
credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig (1)
1 | For <installation_directory> , specify the path to the directory that you stored
the installation files in. |
Verify you can run oc
commands successfully using the exported configuration:
$ oc whoami
system:admin
The kubeadmin
user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin
user by using the OpenShift Container Platform web console.
You have access to the installation host.
You completed a cluster installation and all cluster Operators are available.
Obtain the password for the kubeadmin
user from the kubeadmin-password
file on the installation host:
$ cat <installation_directory>/auth/kubeadmin-password
Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. |
List the OpenShift Container Platform web console route:
$ oc get routes -n openshift-console | grep 'console-openshift'
Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. |
console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None
Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin
user.
See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.
After you install a cluster that uses AWS Local Zones infrastructure, check the status of the machine that was created by the machine set manifests created during installation.
To check the machine sets created from the subnet you added to the install-config.yaml
file, run the following command:
$ oc get machineset -n openshift-machine-api
NAME DESIRED CURRENT READY AVAILABLE AGE
cluster-7xw5g-edge-us-east-1-nyc-1a 1 1 1 1 3h4m
cluster-7xw5g-worker-us-east-1a 1 1 1 1 3h4m
cluster-7xw5g-worker-us-east-1b 1 1 1 1 3h4m
cluster-7xw5g-worker-us-east-1c 1 1 1 1 3h4m
To check the machines that were created from the machine sets, run the following command:
$ oc get machines -n openshift-machine-api
NAME                                        PHASE     TYPE          REGION      ZONE               AGE
cluster-7xw5g-edge-us-east-1-nyc-1a-wbclh   Running   c5d.2xlarge   us-east-1   us-east-1-nyc-1a   3h
cluster-7xw5g-master-0                      Running   m6i.xlarge    us-east-1   us-east-1a         3h4m
cluster-7xw5g-master-1                      Running   m6i.xlarge    us-east-1   us-east-1b         3h4m
cluster-7xw5g-master-2                      Running   m6i.xlarge    us-east-1   us-east-1c         3h4m
cluster-7xw5g-worker-us-east-1a-rtp45       Running   m6i.xlarge    us-east-1   us-east-1a         3h
cluster-7xw5g-worker-us-east-1b-glm7c       Running   m6i.xlarge    us-east-1   us-east-1b         3h
cluster-7xw5g-worker-us-east-1c-qfvz4       Running   m6i.xlarge    us-east-1   us-east-1c         3h
To check nodes with edge roles, run the following command:
$ oc get nodes -l node-role.kubernetes.io/edge
NAME STATUS ROLES AGE VERSION
ip-10-0-207-188.ec2.internal Ready edge,worker 172m v1.25.2+d2e245f
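Optionally, you can also confirm that the edge nodes carry the NoSchedule taint described earlier. The taint key shown assumes the default edge machine set configuration:
$ oc get nodes -l node-role.kubernetes.io/edge \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'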
In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
For more information about the Telemetry service, see About remote health monitoring.
If necessary, you can opt out of remote health.