In OpenShift Container Platform, you can add Red Hat Enterprise Linux (RHEL) compute, or worker, machines to a user-provisioned infrastructure cluster. You can use RHEL as the operating system only on compute machines.
In OpenShift Container Platform 4.1, you have the option of using Red Hat Enterprise Linux (RHEL) machines as compute machines, which are also known as worker machines, in your cluster if you use a user-provisioned infrastructure installation. You must use Red Hat Enterprise Linux CoreOS (RHCOS) machines for the control plane, or master, machines in your cluster.
As with all installations that use user-provisioned infrastructure, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks.
Because removing OpenShift Container Platform from a machine in the cluster requires destroying the operating system, you must use dedicated hardware for any RHEL machines that you add to the cluster.
Swap memory is disabled on all RHEL machines that you add to your OpenShift Container Platform cluster. You cannot enable swap memory on these machines.
You must add RHEL compute machines to the cluster after you initialize the control plane.
The Red Hat Enterprise Linux (RHEL) compute machine hosts, which are also known as worker machine hosts, in your OpenShift Container Platform environment must meet the following minimum hardware specifications and system-level requirements.
You must have an active OpenShift Container Platform subscription on your Red Hat account. If you do not, contact your sales representative for more information.
Production environments must provide compute machines to support your expected workloads. As an OpenShift Container Platform cluster administrator, you must calculate the expected workload and add about 10 percent for overhead. For production environments, allocate enough resources so that a node host failure does not affect your maximum capacity.
Each system must meet the following hardware requirements:
Physical or virtual system, or an instance running on a public or private IaaS.
Base OS: RHEL 7.6 with "Minimal" installation option.
Only RHEL 7.6 is supported in OpenShift Container Platform 4.1. You must not upgrade your compute machines to RHEL 8.
NetworkManager 1.0 or later.
1 vCPU.
Minimum 8 GB RAM.
Minimum 15 GB hard disk space for the file system containing /var/.
Minimum 1 GB hard disk space for the file system containing /usr/local/bin/.
Minimum 1 GB hard disk space for the file system containing the system’s temporary directory. The system’s temporary directory is determined according to the rules defined in the tempfile module in Python’s standard library.
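As a quick spot check against these minimums, you can inspect each candidate host before installation. This is an informal sketch; the last command resolves the system's temporary directory the same way Python's tempfile module does:
# nproc
# free -h
# df -h /var /usr/local/bin "$(python -c 'import tempfile; print(tempfile.gettempdir())')"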
Each system must meet any additional requirements for your system provider. For example, if you installed your cluster on VMware vSphere, your disks must be configured according to its storage guidelines and the disk.enableUUID=true attribute must be set.
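One informal way to spot-check the disk.enableUUID setting from inside a vSphere guest is to confirm that the virtual disks expose a serial number, which is how the attribute typically surfaces; an empty SERIAL column suggests that the attribute is not set. Treat this as an assumption-based sketch rather than a supported verification method:
# lsblk --nodeps -o NAME,SERIAL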
Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them.
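For example, a minimal manual workflow is to list the CSRs that have no status yet along with their requestors, inspect each serving request to confirm that the node name matches a machine that you actually added, and only then approve it. The go-template filter below is a sketch that assumes pending CSRs are the ones without a status field:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}} {{.spec.username}}{{"\n"}}{{end}}{{end}}'
$ oc describe csr <csr_name>
$ oc adm certificate approve <csr_name>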
Before you can add compute machines that use Red Hat Enterprise Linux as the operating system to an OpenShift Container Platform 4.1 cluster, you must prepare a machine to run the playbook from. This machine is not part of the cluster but must be able to access it.
Install the OpenShift Command-line Interface (CLI), commonly known as oc, on the machine that you run the playbook on.
Log in as a user with cluster-admin permission.
Ensure that the kubeconfig file for the cluster and the installation program that you used to install the cluster are on the machine. One way to accomplish this is to use the same machine that you used to install the cluster.
Configure the machine to access all of the RHEL hosts that you plan to use as compute machines. You can use any method that your company allows, including a bastion with an SSH proxy or a VPN.
Configure a user on the machine that you run the playbook on that has SSH access to all of the RHEL hosts.
If you use SSH key-based authentication, you must manage the key with an SSH agent.
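For example, assuming your private key is at ~/.ssh/id_rsa (adjust the path for your environment), you can start an agent and load the key before you run the playbook:
$ eval "$(ssh-agent -s)"
$ ssh-add ~/.ssh/id_rsa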
If you have not already done so, register the machine with RHSM and attach a pool with an OpenShift subscription to it:
Register the machine with RHSM:
# subscription-manager register --username=<user_name> --password=<password>
Pull the latest subscription data from RHSM:
# subscription-manager refresh
List the available subscriptions:
# subscription-manager list --available --matches '*OpenShift*'
In the output for the previous command, find the pool ID for an OpenShift Container Platform subscription and attach it:
# subscription-manager attach --pool=<pool_id>
Enable the repositories required by OpenShift Container Platform 4.1:
# subscription-manager repos \
    --enable="rhel-7-server-rpms" \
    --enable="rhel-7-server-extras-rpms" \
    --enable="rhel-7-server-ansible-2.7-rpms" \
    --enable="rhel-7-server-ose-4.1-rpms"
Install the required packages, including openshift-ansible:
# yum install openshift-ansible openshift-clients jq
The openshift-ansible package provides installation program utilities and pulls in other packages that you require to add a RHEL compute node to your cluster, such as Ansible, playbooks, and related configuration files. The openshift-clients package provides the oc CLI, and the jq package improves the display of JSON output on your command line.
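As an optional sanity check, assuming the installation completed without errors, you can confirm that the packages are present and see which Ansible version they pulled in:
# rpm -q openshift-ansible openshift-clients jq
# ansible --version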
Before you add a Red Hat Enterprise Linux (RHEL) machine to your OpenShift Container Platform cluster, you must register each host with Red Hat Subscription Manager (RHSM), attach an active OpenShift Container Platform subscription, and enable the required repositories.
On each host, register with RHSM:
# subscription-manager register --username=<user_name> --password=<password>
Pull the latest subscription data from RHSM:
# subscription-manager refresh
List the available subscriptions:
# subscription-manager list --available --matches '*OpenShift*'
In the output for the previous command, find the pool ID for an OpenShift Container Platform subscription and attach it:
# subscription-manager attach --pool=<pool_id>
Disable all yum repositories:
Disable all the enabled RHSM repositories:
# subscription-manager repos --disable="*"
List the remaining yum repositories and note their names under repo id, if any:
# yum repolist
Use yum-config-manager to disable the remaining yum repositories:
# yum-config-manager --disable <repo_id>
Alternatively, disable all repositories:
# yum-config-manager --disable \*
Note that this might take a few minutes if you have a large number of available repositories.
Enable only the repositories required by OpenShift Container Platform 4.1:
# subscription-manager repos \
    --enable="rhel-7-server-rpms" \
    --enable="rhel-7-server-extras-rpms" \
    --enable="rhel-7-server-ose-4.1-rpms"
Stop and disable firewalld on the host:
# systemctl disable --now firewalld.service
You must not enable firewalld later. If you do, you cannot access OpenShift Container Platform logs on the worker.
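To confirm the state on each host, you can check that the service is neither active nor enabled; inactive and disabled is the expected output:
# systemctl is-active firewalld.service
# systemctl is-enabled firewalld.service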
You can add compute machines that use Red Hat Enterprise Linux as the operating system to an OpenShift Container Platform 4.1 cluster.
You installed the required packages and performed the necessary configuration on the machine that you run the playbook on.
You prepared the RHEL hosts for installation.
Perform the following steps on the machine that you prepared to run the playbook:
Extract the pull secret for your cluster:
$ oc -n openshift-config get -o jsonpath='{.data.\.dockerconfigjson}' secret pull-secret | base64 -d | jq .
Save the pull secret in a file that is named pull-secret.txt.
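For example, assuming you want the file in your home directory so that it matches the inventory example in the next step, you can combine the extraction and save steps:
$ oc -n openshift-config get -o jsonpath='{.data.\.dockerconfigjson}' secret pull-secret | base64 -d > ~/pull-secret.txt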
Create an Ansible inventory file that is named /<path>/inventory/hosts that defines your compute machine hosts and required variables:
[all:vars]
ansible_user=root (1)
#ansible_become=True (2)
openshift_kubeconfig_path="~/.kube/config" (3)
openshift_pull_secret_path="~/pull-secret.txt" (4)

[new_workers] (5)
mycluster-worker-0.example.com
mycluster-worker-1.example.com
1 | Specify the user name that runs the Ansible tasks on the remote compute machines. |
2 | If you do not specify root for the ansible_user, you must set ansible_become to True and assign the user sudo permissions. |
3 | Specify the path to the kubeconfig file for your cluster. |
4 | Specify the path to the file that contains the pull secret for the image registry for your cluster. |
5 | List each RHEL machine to add to your cluster. You must provide the fully-qualified domain name for each host. This name is the host name that the cluster uses to access the machine, so set the correct public or private name to access the machine. |
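Optionally, before you run the playbook, you can verify that Ansible can reach every host in the new_workers group by using the inventory that you created; the inventory path below is the same placeholder that is used in the next step:
$ ansible -i /<path>/inventory/hosts new_workers -m ping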
Run the playbook:
$ cd /usr/share/ansible/openshift-ansible
$ ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml (1)
1 | For <path>, specify the path to the Ansible inventory file that you created. |
When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself.
You added machines to your cluster.
Confirm that the cluster recognizes the machines:
$ oc get nodes

NAME       STATUS     ROLES    AGE   VERSION
master-0   Ready      master   63m   v1.13.4+b626c2fe1
master-1   Ready      master   63m   v1.13.4+b626c2fe1
master-2   Ready      master   64m   v1.13.4+b626c2fe1
worker-0   NotReady   worker   76s   v1.13.4+b626c2fe1
worker-1   NotReady   worker   70s   v1.13.4+b626c2fe1
The output lists all of the machines that you created.
Review the pending certificate signing requests (CSRs) and ensure that you see a client and server request with Pending or Approved status for each machine that you added to the cluster:
$ oc get csr

NAME        AGE     REQUESTOR                                                                   CONDITION
csr-8b2br   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending (1)
csr-8vnps   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal                       Pending (2)
csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal                       Pending
...
1 | A client request CSR. |
2 | A server request CSR. |
In this example, two machines are joining the cluster. You might see more approved CSRs in the list.
If the CSRs were not approved, after all of the pending CSRs for the machines that you added are in Pending status, approve the CSRs for your cluster machines:
Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After you approve the initial CSRs, the subsequent node client CSRs are automatically approved by the cluster.
To approve them individually, run the following command for each valid CSR:
$ oc adm certificate approve <csr_name> (1)
1 | <csr_name> is the name of a CSR from the list of current CSRs. |
To approve all pending CSRs, run the following command:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
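After the CSRs are approved, it can take a few minutes for the new machines to report the Ready status. You can confirm that no requests remain pending and that the workers become Ready:
$ oc get csr
$ oc get nodes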
You must define the following parameters in the Ansible hosts file before you add Red Hat Enterprise Linux (RHEL) compute machines to your cluster.
Parameter | Description | Values |
---|---|---|
ansible_user | The SSH user that allows SSH-based authentication without requiring a password. If you use SSH key-based authentication, then you must manage the key with an SSH agent. | A user name on the system. The default value is root. |
ansible_become | If the value of ansible_user is not root, you must set ansible_become to True and assign the specified user sudo permissions. | True |
openshift_kubeconfig_path | Specifies a path to a local directory that contains the kubeconfig file for your cluster. | The path and name of the configuration file. |
openshift_pull_secret_path | Specifies a path to the text file that contains the pull secret to the image registry for your cluster. Use the pull secret that you obtained from the OpenShift Infrastructure Providers page. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. | The path and name of the pull secret file. |
After you add the Red Hat Enterprise Linux (RHEL) compute machines to your cluster, you can remove the Red Hat Enterprise Linux CoreOS (RHCOS) compute machines.
You have added RHEL compute machines to your cluster.
View the list of machines and record the node names of the RHCOS compute machines:
$ oc get nodes -o wide
For each RHCOS compute machine, delete the node:
Mark the node as unschedulable by running the oc adm cordon command:
$ oc adm cordon <node_name> (1)
1 | Specify the node name of one of the RHCOS compute machines. |
Drain all the pods from the node:
$ oc adm drain <node_name> --force --delete-local-data --ignore-daemonsets (1)
1 | Specify the node name of the RHCOS compute machine that you isolated. |
Delete the node:
$ oc delete nodes <node_name> (1)
1 | Specify the node name of the RHCOS compute machine that you drained. |
Review the list of compute machines to ensure that only the RHEL nodes remain:
$ oc get nodes -o wide
Remove the RHCOS machines from the load balancer for your cluster’s compute machines. You can delete the virtual machines or reimage the physical hardware for the RHCOS compute machines.
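If you have several RHCOS compute machines to remove, a simple shell loop over the node names that you recorded is one way to apply the cordon, drain, and delete steps above; the node names here are placeholders:
$ for node in <rhcos_node_1> <rhcos_node_2>; do \
    oc adm cordon "$node"; \
    oc adm drain "$node" --force --delete-local-data --ignore-daemonsets; \
    oc delete nodes "$node"; \
  done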