Create a Red Hat OpenShift Service on AWS (ROSA) (classic architecture) cluster quickly by using a Terraform cluster template that is configured with the default cluster options.

The cluster creation process described below uses a Terraform configuration that prepares a ROSA (classic architecture) cluster that uses the AWS Security Token Service (STS), with the following resources:

  • An OIDC provider with a managed oidc-config configuration

  • Prerequisite IAM Operator roles with associated AWS Managed ROSA Policies

  • IAM account roles with associated AWS Managed ROSA Policies

  • All other AWS resources required to create a ROSA with STS cluster

Overview of Terraform

Terraform is an infrastructure-as-code tool that provides a way to configure your resources once and replicate those resources as desired. Terraform accomplishes the creation tasks by using declarative language. You declare what you want the final state of the infrastructure resource to be, and Terraform creates these resources to your specifications.
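
For example, the following minimal, hypothetical configuration declares the desired end state of an AWS S3 bucket; Terraform determines whatever create, update, or delete actions are needed to reach that state:

    # Hypothetical example: declare only the desired end state.
    resource "aws_s3_bucket" "example" {
      bucket = "my-example-bucket"

      tags = {
        Environment = "dev"
      }
    }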

Prerequisites

To use the Red Hat Cloud Services provider inside your Terraform configuration, you must meet the following prerequisites:

  • You have installed the Red Hat OpenShift Service on AWS (ROSA) command-line interface (CLI) tool.

  • You have your offline Red Hat OpenShift Cluster Manager token.

  • You have installed Terraform version 1.4.6 or newer.

  • You have created your AWS account-wide IAM roles.

    The specific account-wide IAM roles and policies provide the STS permissions required for ROSA support, installation, control plane, and compute functionality. This includes account-wide Operator policies. See Additional resources for more information about the AWS account roles.

  • You have an AWS account and associated credentials that allow you to create resources. The credentials are configured for the AWS provider. See the Authentication and Configuration section in AWS Terraform provider documentation.

  • The AWS IAM identity (user or role) that runs Terraform has, at minimum, the following permissions in its policy. You can check for these permissions in the AWS console. A quick command-line check for several of these prerequisites is shown after this list.

    Minimum AWS permissions for Terraform
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "VisualEditor0",
          "Effect": "Allow",
          "Action": [
            "iam:GetPolicyVersion",
            "iam:DeletePolicyVersion",
            "iam:CreatePolicyVersion",
            "iam:UpdateAssumeRolePolicy",
            "secretsmanager:DescribeSecret",
            "iam:ListRoleTags",
            "secretsmanager:PutSecretValue",
            "secretsmanager:CreateSecret",
            "iam:TagRole",
            "secretsmanager:DeleteSecret",
            "iam:UpdateOpenIDConnectProviderThumbprint",
            "iam:DeletePolicy",
            "iam:CreateRole",
            "iam:AttachRolePolicy",
            "iam:ListInstanceProfilesForRole",
            "secretsmanager:GetSecretValue",
            "iam:DetachRolePolicy",
            "iam:ListAttachedRolePolicies",
            "iam:ListPolicyTags",
            "iam:ListRolePolicies",
            "iam:DeleteOpenIDConnectProvider",
            "iam:DeleteInstanceProfile",
            "iam:GetRole",
            "iam:GetPolicy",
            "iam:ListEntitiesForPolicy",
            "iam:DeleteRole",
            "iam:TagPolicy",
            "iam:CreateOpenIDConnectProvider",
            "iam:CreatePolicy",
            "secretsmanager:GetResourcePolicy",
            "iam:ListPolicyVersions",
            "iam:UpdateRole",
            "iam:GetOpenIDConnectProvider",
            "iam:TagOpenIDConnectProvider",
            "secretsmanager:TagResource",
            "sts:AssumeRoleWithWebIdentity",
            "iam:ListRoles"
          ],
          "Resource": [
            "arn:aws:secretsmanager:*:<ACCOUNT_ID>:secret:*",
            "arn:aws:iam::<ACCOUNT_ID>:instance-profile/*",
            "arn:aws:iam::<ACCOUNT_ID>:role/*",
            "arn:aws:iam::<ACCOUNT_ID>:oidc-provider/*",
            "arn:aws:iam::<ACCOUNT_ID>:policy/*"
          ]
        },
        {
          "Sid": "VisualEditor1",
          "Effect": "Allow",
          "Action": [
            "s3:*"
            ],
          "Resource": "*"
        }
      ]
    }
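
As a quick sanity check, you can verify several of these prerequisites from the command line. This sketch assumes that the terraform, rosa, and aws CLI tools are installed and that you have already logged in with rosa login:

    $ terraform version                 # confirm version 1.4.6 or newer
    $ rosa whoami                       # confirm the ROSA CLI can reach your account
    $ aws sts get-caller-identity       # confirm the AWS credentials that Terraform will use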

Considerations when using Terraform

In general, when you use Terraform to manage cloud resources, expect to make any further changes by using the Terraform methodology. Use caution when using tools outside of Terraform, such as the AWS console or the Red Hat console, to modify cloud resources created by Terraform. Using tools outside of Terraform to manage cloud resources that Terraform already manages introduces configuration drift from your declared Terraform configuration.

For example, if you upgrade your Terraform-created cluster by using the Red Hat Hybrid Cloud Console, you need to reconcile your Terraform state before applying any forthcoming configuration changes. For more information, see Manage resources in Terraform state in the HashiCorp Developer documentation.
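
For example, if you suspect drift, a refresh-only plan (available in Terraform 1.1 and later) reports how the deployed infrastructure differs from your state file without proposing changes to the infrastructure itself, and a refresh-only apply updates only the state file to accept those changes:

    $ terraform plan -refresh-only
    $ terraform apply -refresh-only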

Overview of the default cluster specifications

The following list describes each cluster component and its default specifications.

Accounts and roles

  • Default IAM role prefix: rosa-<6-digit-alphanumeric-string>

  • No cluster admin role created

Cluster settings

  • Default cluster version: 4.14

  • Cluster name: rosa-<6-digit-alphanumeric-string>

  • Default AWS region for installations using the Red Hat OpenShift Cluster Manager Hybrid Cloud Console: us-east-2 (US East, Ohio)

  • Availability: Multi-zone for the data plane

  • Default EC2 IMDS endpoints (both v1 and v2) are enabled

  • Monitoring for user-defined projects: Enabled

Encryption

  • Cloud storage is encrypted at rest

  • Additional etcd encryption is not enabled

  • The default AWS Key Management Service (KMS) key is used as the encryption key for persistent data

Control plane node configuration

  • Control plane node instance type: m5.2xlarge (8 vCPU, 32 GiB RAM)

  • Control plane node count: 3

Infrastructure node configuration

  • Infrastructure node instance type: r5.xlarge (4 vCPU, 32 GiB RAM)

  • Infrastructure node count: 2

Compute node machine pool

  • Compute node instance type: m5.xlarge (4 vCPU, 16 GiB RAM)

  • Compute node count: 3

  • Autoscaling: Not enabled

  • No additional node labels

Networking configuration

  • Cluster privacy: public or private

  • You can choose to create a new VPC during the Terraform cluster creation process.

  • No cluster-wide proxy is configured

Classless Inter-Domain Routing (CIDR) ranges

  • Machine CIDR: 10.0.0.0/16

  • Service CIDR: 172.30.0.0/16

  • Pod CIDR: 10.128.0.0/14

  • Host prefix: /23

Cluster roles and policies

  • Mode used to create the Operator roles and the OpenID Connect (OIDC) provider: auto

    For installations that use OpenShift Cluster Manager on the Hybrid Cloud Console, the auto mode requires an admin-privileged OpenShift Cluster Manager role.

  • Default Operator role prefix: rosa-<6-digit-alphanumeric-string>

Cluster update strategy

  • Individual updates

  • 1 hour grace period for node draining
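
Many of these defaults map to variables in the variables.tf file that you create later in this procedure. For example, you can override them at apply time with the -var flag; the values shown here are illustrative:

    $ terraform apply -var "cluster_name=my-cluster" -var "multi_az=false" -var "worker_node_replicas=2"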

Creating a default ROSA (classic architecture) cluster using Terraform

The cluster creation process outlined below shows how to use Terraform to create your account-wide IAM roles and a ROSA (classic architecture) cluster with a managed OIDC configuration.

Preparing your environment for Terraform

Before you can create your Red Hat OpenShift Service on AWS cluster by using Terraform, you need to export your offline Red Hat OpenShift Cluster Manager token.

Procedure
  1. Optional: Because the Terraform files are created in your current directory during this procedure, you can create a new directory to store these files and navigate to it by running the following command:

    $ mkdir terraform-cluster && cd terraform-cluster
  2. Obtain your offline Red Hat OpenShift Cluster Manager token, which grants permissions to your account.

  3. Copy your offline token, and set the token as an environment variable by running the following command:

    $ export RHCS_TOKEN=<your_offline_token>

    This environment variable is cleared at the end of each session, for example, when you restart your machine or close the terminal.

Verification
  • After you export your token, verify the value by running the following command:

    $ echo $RHCS_TOKEN

Creating your Terraform files locally

After you set up your offline Red Hat OpenShift Cluster Manager token, you need to create the Terraform files locally to build your cluster. You can create these files by using the following code templates.

Procedure
  1. Create the main.tf file by running the following command:

    $ cat<<-EOF>main.tf
    #
    # Copyright (c) 2023 Red Hat, Inc.
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    #   http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    #
    
    terraform {
      required_providers {
        aws = {
          source  = "hashicorp/aws"
          version = ">= 4.20.0"
        }
        rhcs = {
          version = ">= 1.6.2"
          source  = "terraform-redhat/rhcs"
        }
      }
    }
    
    # Export token using the RHCS_TOKEN environment variable
    provider "rhcs" {}
    
    provider "aws" {
      region = var.aws_region
      ignore_tags {
        key_prefixes = ["kubernetes.io/"]
      }
      default_tags {
        tags = var.default_aws_tags
      }
    }
    
    data "aws_availability_zones" "available" {}
    
    locals {
      # The default setting creates 3 availability zones. Set var.multi_az to "false" to create a single availability zone.
      region_azs = var.multi_az ? slice([for zone in data.aws_availability_zones.available.names : format("%s", zone)], 0, 3) : slice([for zone in data.aws_availability_zones.available.names : format("%s", zone)], 0, 1)
    }
    
    resource "random_string" "random_name" {
      length  = 6
      special = false
      upper   = false
    }
    
    locals {
      path                 = coalesce(var.path, "/")
      worker_node_replicas = try(var.worker_node_replicas, var.multi_az ? 3 : 2)
      # If cluster_name is not null, use that, otherwise generate a random cluster name
      cluster_name = coalesce(var.cluster_name, "rosa-\${random_string.random_name.result}")
    }
    
    # The network validator requires an additional 60 seconds to validate Terraform clusters.
    resource "time_sleep" "wait_60_seconds" {
      count = var.create_vpc ? 1 : 0
      depends_on = [module.vpc]
      create_duration = "60s"
    }
    
    module "rosa-classic" {
      source                 = "terraform-redhat/rosa-classic/rhcs"
      version                = "1.5.0"
      cluster_name           = local.cluster_name
      openshift_version      = var.openshift_version
      account_role_prefix    = local.cluster_name
      operator_role_prefix   = local.cluster_name
      replicas               = local.worker_node_replicas
      aws_availability_zones = local.region_azs
      create_oidc            = true
      private                = var.private_cluster
      aws_private_link       = var.private_cluster
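      # Subnet selection: for a new VPC, use only private subnets for a
      # private cluster, or public and private subnets for a public cluster.
      # Otherwise, use the subnet IDs supplied in var.aws_subnet_ids.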
      aws_subnet_ids         = var.create_vpc ? var.private_cluster ? module.vpc[0].private_subnets : concat(module.vpc[0].public_subnets, module.vpc[0].private_subnets) : var.aws_subnet_ids
      multi_az               = var.multi_az
      create_account_roles   = true
      create_operator_roles  = true
    
      depends_on = [time_sleep.wait_60_seconds]
    }
    EOF
  2. Create the variables.tf file by running the following command:

    Copy this template and edit the variable values as needed before you run the command to build your cluster.

    $ cat<<-EOF>variables.tf
    #
    # Copyright (c) 2023 Red Hat, Inc.
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    #   http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    #
    variable "openshift_version" {
      type        = string
      default     = "4.14.20"
      description = "Desired version of OpenShift for the cluster, for example '4.14.20'. If version is greater than the currently running version, an upgrade will be scheduled."
    }
    
    variable "create_vpc" {
      type        = bool
      description = "If you would like to create a new VPC, set this value to 'true'. If you do not want to create a new VPC, set this value to 'false'."
    }
    
    # ROSA Cluster info
    variable "cluster_name" {
      default     = null
      type        = string
      description = "The name of the ROSA cluster to create"
    }
    
    variable "additional_tags" {
      default = {
        Terraform   = "true"
        Environment = "dev"
      }
      description = "Additional AWS resource tags"
      type        = map(string)
    }
    
    variable "path" {
      description = "(Optional) The arn path for the account/operator roles as well as their policies."
      type        = string
      default     = null
    }
    
    variable "multi_az" {
      type        = bool
      description = "Multi AZ Cluster for High Availability"
      default     = true
    }
    
    variable "worker_node_replicas" {
      default     = 3
      description = "Number of worker nodes to provision. Single zone clusters need at least 2 nodes; multizone clusters need at least 3 nodes"
      type        = number
    }
    
    variable "aws_subnet_ids" {
      type        = list(any)
      description = "A list of either the public or public + private subnet IDs to use for the cluster"
      default     = ["subnet-01234567890abcdef", "subnet-01234567890abcdef", "subnet-01234567890abcdef"]
    }
    
    variable "private_cluster" {
      type        = bool
      description = "If you want to create a private cluster, set this value to 'true'. If you want a publicly available cluster, set this value to 'false'."
    }
    
    #VPC Info
    variable "vpc_name" {
      type        = string
      description = "VPC Name"
      default     = "tf-qs-vpc"
    }
    
    variable "vpc_cidr_block" {
      type        = string
      description = "The CIDR block to use for the VPC"
      default     = "10.0.0.0/16"
    }
    
    variable "private_subnet_cidrs" {
      type        = list(any)
      description = "The CIDR blocks to use for the private subnets"
      default     = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
    }
    
    variable "public_subnet_cidrs" {
      type        = list(any)
      description = "The CIDR blocks to use for the public subnets"
      default     = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]
    }
    
    variable "single_nat_gateway" {
      type        = bool
      description = "Set to 'true' to use a single NAT gateway instead of one NAT gateway per subnet"
      default     = false
    }
    
    #AWS Info
    variable "aws_region" {
      type    = string
      default = "us-east-2"
    }
    
    variable "default_aws_tags" {
      type        = map(string)
      description = "Default tags for AWS"
      default     = {}
    }
    EOF
  3. Create the vpc.tf file by running the following command:

    $ cat<<-EOF>vpc.tf
    #
    # Copyright (c) 2023 Red Hat, Inc.
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    #   http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    #
    module "vpc" {
      source  = "terraform-aws-modules/vpc/aws"
      version = "5.1.2"
    
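      # Create the VPC only when var.create_vpc is set to "true".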
      count = var.create_vpc ? 1 : 0
      name  = var.vpc_name
      cidr  = var.vpc_cidr_block
    
      azs             = local.region_azs
      private_subnets = var.private_subnet_cidrs
      public_subnets  = var.public_subnet_cidrs
    
      enable_nat_gateway   = true
      single_nat_gateway   = var.single_nat_gateway
      enable_dns_hostnames = true
      enable_dns_support   = true
    
      tags = var.additional_tags
    }
    EOF

    You are now ready to initialize Terraform.

Using Terraform to create your ROSA cluster

After you create the Terraform files, you must initialize Terraform to download all of the required dependencies, and then apply the Terraform plan.

Do not modify Terraform state files. For more information, see Considerations when using Terraform.

Procedure
  1. To set up Terraform to create your resources based on your Terraform files, run the following command:

    $ terraform init
  2. Optional: Verify that the Terraform files you copied are valid by running the following command:

    $ terraform validate
    Example output
    Success! The configuration is valid.
  3. Create your cluster with Terraform by running the following command:

    $ terraform apply

    The Terraform interface asks two questions to create your cluster, similar to the following:

    Example output
    var.create_vpc
      If you would like to create a new VPC, set this value to 'true'. If you do not want to create a new VPC, set this value to 'false'.
    
      Enter a value:
    
    var.private_cluster
      If you want to create a private cluster, set this value to 'true'. If you want a publicly available cluster, set this value to 'false'.
    
      Enter a value:
  4. When the Terraform interface lists the resources to be created or changed and prompts for confirmation, enter yes to proceed or no to cancel:

    Example output
    Plan: 74 to add, 0 to change, 0 to destroy.
    
    Do you want to perform these actions?
      Terraform will perform the actions described above.
      Only 'yes' will be accepted to approve.
    
      Enter a value: yes

    If you enter yes, your Terraform plan starts, creating your AWS account roles, Operator roles, and your ROSA Classic cluster.
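
    Alternatively, you can supply the two prompted variables on the command line and skip the interactive confirmation. The values shown are illustrative, and -auto-approve bypasses the confirmation prompt, so use it with care:

    $ terraform apply -var create_vpc=true -var private_cluster=false -auto-approve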

Verification
  1. Verify that your cluster was created by running the following command:

    $ rosa list clusters
    Example output showing the cluster's ID, name, state, and topology:
    ID                                NAME          STATE  TOPOLOGY
    27c3snjsupa9obua74ba8se5kcj11269  rosa-tf-demo  ready  Classic (STS)
  2. Verify that your account roles were created by running the following command:

    $ rosa list account-roles
    Example output
    I: Fetching account roles
    ROLE NAME                                   ROLE TYPE      ROLE ARN                                                           OPENSHIFT VERSION  AWS Managed
    ROSA-demo-ControlPlane-Role                 Control plane  arn:aws:iam::<ID>:role/ROSA-demo-ControlPlane-Role                 4.14               No
    ROSA-demo-Installer-Role                    Installer      arn:aws:iam::<ID>:role/ROSA-demo-Installer-Role                    4.14               No
    ROSA-demo-Support-Role                      Support        arn:aws:iam::<ID>:role/ROSA-demo-Support-Role                      4.14               No
    ROSA-demo-Worker-Role                       Worker         arn:aws:iam::<ID>:role/ROSA-demo-Worker-Role                       4.14               No
  3. Verify that your Operator roles were created by running the following command:

    $ rosa list operator-roles
    Example output showing Terraform-created Operator roles:
    I: Fetching operator roles
    ROLE PREFIX    AMOUNT IN BUNDLE
    rosa-demo      6
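  4. Optional: View more details about the cluster, such as the API URL, console URL, and network configuration, by describing the cluster by name or ID. The cluster name shown assumes the example output above:

    $ rosa describe cluster --cluster=rosa-tf-demo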

Deleting your ROSA cluster with Terraform

Use the terraform destroy command to remove all of the resources that were created with the terraform apply command.

Do not modify your Terraform .tf files before destroying your resources. Terraform uses the values in these files to identify the resources to delete.

Procedure
  1. In the directory where you ran the terraform apply command to create your cluster, run the following command to delete the cluster:

    $ terraform destroy

    The Terraform interface prompts you for two variables. These should match the answers you provided when you created the cluster:

    var.create_vpc
      If you would like to create a new VPC, set this value to 'true'. If you do not want to create a new VPC, set this value to 'false'.
    
      Enter a value:
    
    var.private_cluster
      If you want to create a private cluster, set this value to 'true'. If you want a publicly available cluster, set this value to 'false'.
    
      Enter a value:
  2. Enter yes to start the role and cluster deletion:

    Example output
    Plan: 0 to add, 0 to change, 74 to destroy.
    
    Do you really want to destroy all resources?
      Terraform will destroy all your managed infrastructure, as shown above.
      There is no undo. Only 'yes' will be accepted to confirm.
    
      Enter a value: yes
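
    As with terraform apply, you can supply the two prompted values on the command line instead of answering interactively. The values shown assume that you created the cluster with a new VPC and public access:

    $ terraform destroy -var create_vpc=true -var private_cluster=false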

Verification
  1. Verify that your cluster was destroyed by running the following command:

    $ rosa list clusters
    Example output showing no clusters:
    I: No clusters available
  2. Verify that the account roles were destroyed by running the following command:

    $ rosa list account-roles
    Example output showing no Terraform-created account roles:
    I: Fetching account roles
    I: No account roles available
  3. Verify that the Operator roles were destroyed by running the following command:

    $ rosa list operator-roles
    Example output showing no Terraform-created Operator roles:
    I: Fetching operator roles
    I: No operator roles available
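  4. Optional: If you have the AWS CLI installed, confirm that the cluster's managed OIDC provider was removed from your AWS account by listing the remaining providers:

    $ aws iam list-open-id-connect-providers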