After installing OpenShift Container Platform, your system might need extra configuration to ensure your hosts consistently run smoothly.
While these are classified as run-once tasks, you can perform any of them at any time if circumstances change.
NTP (Network Time Protocol) keeps hosts in sync with the world clock. Time synchronization is important for time-sensitive operations, such as log keeping and time stamps, and is highly recommended for Kubernetes, which OpenShift Container Platform is built on. OpenShift Container Platform operations such as etcd leader election and pod health checks depend on accurate clocks, and keeping hosts synchronized helps prevent time skew problems.
The OpenShift Container Platform installation playbooks install, enable, and configure an NTP service by default.
Depending on your instance, NTP might not be enabled by default. To verify that a host is configured to use NTP:
$ timedatectl
      Local time: Thu 2017-12-21 14:58:34 UTC
  Universal time: Thu 2017-12-21 14:58:34 UTC
        RTC time: Thu 2017-12-21 14:58:34
       Time zone: Etc/UTC (UTC, +0000)
     NTP enabled: yes
NTP synchronized: yes
 RTC in local TZ: no
      DST active: n/a
If both NTP enabled and NTP synchronized are yes, then NTP synchronization is active.
If either value is no, install and enable the ntp or chrony RPM package.
To enable NTP synchronization with timedatectl, run the following command:
# timedatectl set-ntp true
To install and enable the chrony package instead, run the following commands:
# yum install chrony
# systemctl enable --now chronyd
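As an optional check once chronyd is running, you can confirm that the host is synchronizing against its configured time sources:
# chronyc tracking
# chronyc sources -v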
Time synchronization should be enabled on all hosts in the cluster, whether using NTP or any other method.
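To spot-check time synchronization across the cluster, one approach is an Ansible ad hoc command run from the host that holds your inventory; the inventory path and the nodes group name below are assumptions and depend on your environment:
$ ansible nodes -i /etc/ansible/hosts -m command -a "timedatectl"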
For more information about the timedatectl command, timezones, and clock configuration, see Configuring the date and time and UTC, Timezones, and DST.
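For example, to review the available timezones and set one with timedatectl (Etc/UTC is used here only because it matches the output above; choose the timezone appropriate for your environment):
$ timedatectl list-timezones
# timedatectl set-timezone Etc/UTC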
OpenShift Container Platform uses entropy to generate random numbers for objects such as IDs or SSL traffic. These operations wait until there is enough entropy to complete the task. Without enough entropy, the kernel is not able to generate these random numbers with sufficient speed, which can lead to timeouts and the refusal of secure connections.
To check available entropy:
$ cat /proc/sys/kernel/random/entropy_avail
2683
The available entropy should be verified on all node hosts in the cluster. Ideally, this value should be above 1000. Red Hat recommends monitoring this value and issuing an alert if it falls below that threshold.
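A minimal monitoring sketch is shown below; the threshold of 1000 and the use of logger are assumptions, and in practice you would hook the check into your existing monitoring or alerting system:
#!/bin/bash
# Log a warning if available entropy drops below an assumed threshold.
threshold=1000
entropy=$(cat /proc/sys/kernel/random/entropy_avail)
if [ "$entropy" -lt "$threshold" ]; then
    logger -t entropy-check "WARNING: available entropy is ${entropy}, below ${threshold}"
fi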
Alternatively, you can use the rngtest command to check not only the available entropy, but also whether your system can supply enough entropy:
$ cat /dev/random | rngtest -c 100
The rngtest command is available in the rng-tools package. If the above command takes around 30 seconds to complete, there is not enough entropy available.
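For example, you can measure how long the check takes with the time utility; any timing method works, and time is shown here only as one option:
$ time cat /dev/random | rngtest -c 100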
Depending on your environment, entropy can be increased in multiple ways. For more information, see the following blog post: https://developers.redhat.com/blog/2017/10/05/entropy-rhel-based-cloud-instances/.
Generally, you can increase entropy by installing the rng-tools package and enabling the rngd service:
# yum install rng-tools
# systemctl enable --now rngd
Once the rngd service has started, entropy should increase to a sufficient level.
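You can confirm this by checking the available entropy again and comparing it with the earlier value:
$ cat /proc/sys/kernel/random/entropy_avail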
For dynamically provisioned persistent storage to function properly, a default storage class must be defined. During installation, this default storage class is defined for common cloud providers, such as Amazon Web Services (AWS), Google Cloud Platform (GCP), and more.
To verify that the default storage class is defined:
$ oc get storageclass
NAME                 TYPE
ssd                  kubernetes.io/gce-pd
standard (default)   kubernetes.io/gce-pd
The above output is taken from an OpenShift Container Platform instance running on GCP, where two kinds of persistent storage are available: standard (HDD) and SSD. Notice that the standard storage class is configured as the default. If there is no storage class defined, or none is set as the default, see the Dynamic Provisioning and Creating Storage Classes section for instructions on how to set up a storage class.
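As a sketch, you can mark an existing storage class as the default by setting the is-default-class annotation; the class name standard is taken from the example above, and the exact annotation key (storageclass.kubernetes.io/is-default-class, or the older storageclass.beta.kubernetes.io/is-default-class) depends on your OpenShift Container Platform version:
$ oc patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'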