A node provides the runtime environments for containers. Each node in a
Kubernetes cluster has the required services to be managed by the master. Nodes
also have the required services to run pods, including the container runtime, a
kubelet, and a service proxy.
OpenShift Container Platform creates nodes from a cloud provider, physical systems, or virtual
systems. Kubernetes interacts with node objects
that are a representation of those nodes. The master uses the information from
node objects to validate nodes with health checks. A node is ignored until it
passes the health checks, and the master continues checking nodes until they are
valid. The Kubernetes documentation
has more information on node statuses and management.
Note: See the cluster limits section for the recommended maximum number of nodes.
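You can check whether a node has passed its health checks with the oc CLI; oc get nodes shows each node's status, and oc describe node shows the underlying conditions the master evaluates. The node name below matches the example used later in this topic:
$ oc get nodes
$ oc describe node node1.example.com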
Kubelet
Each node has a kubelet that updates the node as specified by a container
manifest, which is a YAML file that describes a pod. The kubelet uses a set of
manifests to ensure that its containers are started and that they continue to
run.
A container manifest can be provided to a kubelet by:
-
A file path on the command line that is checked every 20 seconds.
-
An HTTP endpoint passed on the command line that is checked every 20 seconds.
-
The kubelet watching an etcd server, such as /registry/hosts/$(hostname -f), and acting on any changes.
-
The kubelet listening for HTTP and responding to a simple API to submit a new
manifest.
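As an illustration only, the file and HTTP sources correspond to kubelet flags such as the following; the exact flags and defaults vary by kubelet version, and the manifest URL shown is hypothetical:
--pod-manifest-path=/etc/origin/node/pods --file-check-frequency=20s
--manifest-url=https://example.com/manifests --http-check-frequency=20s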
Service Proxy
Each node also runs a simple network proxy that reflects the services defined in
the API on that node. This allows the node to do simple TCP and UDP stream
forwarding across a set of back ends.
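For example, a service defined in the API like the following minimal, hypothetical definition is reflected by the proxy on each node, which forwards its TCP and UDP ports to the pods selected by app: example:
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example
  ports:
  - name: web
    protocol: TCP
    port: 8080
    targetPort: 8080
  - name: dns
    protocol: UDP
    port: 53
    targetPort: 53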
Node Object Definition
The following is an example node object definition in Kubernetes:
apiVersion: v1 (1)
kind: Node (2)
metadata:
  creationTimestamp: null
  labels: (3)
    kubernetes.io/hostname: node1.example.com
  name: node1.example.com (4)
spec:
  externalID: node1.example.com (5)
status:
  nodeInfo:
    bootID: ""
    containerRuntimeVersion: ""
    kernelVersion: ""
    kubeProxyVersion: ""
    kubeletVersion: ""
    machineID: ""
    osImage: ""
    systemUUID: ""
(1) apiVersion defines the API version to use.
(2) kind set to Node identifies this as a definition for a node object.
(3) metadata.labels lists any labels that have been added to the node.
(4) metadata.name is a required value that defines the name of the node object. This value is shown in the NAME column when running the oc get nodes command.
(5) spec.externalID defines the fully-qualified domain name where the node can be reached. Defaults to the metadata.name value when empty.
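To view the full object for an existing node, including the status stanza that the kubelet populates, use oc get with YAML output (node1.example.com matches the example above):
$ oc get node node1.example.com -o yaml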
Node Bootstrapping
Starting in OpenShift Container Platform 3.10, a node’s configuration is bootstrapped from
the master, which means nodes pull their pre-defined configuration and client
and server certificates from the master. This allows faster node start-up by
reducing the differences between nodes, as well as centralizing more
configuration and letting the cluster converge on the desired state. Certificate
rotation and centralized certificate management are enabled by default.
Figure 2. Node bootstrapping workflow overview
When node services are started, the node checks if the
/etc/origin/node/node.kubeconfig file and other node configuration files
exist before joining the cluster. If they do not, the node pulls the
configuration from the master, then joins the cluster.
ConfigMaps are used
to store the node configuration in the cluster, which populates the
configuration file on the node host at /etc/origin/node/node-config.yaml.
For definitions of the set of default node groups and their ConfigMaps, see
Defining Node Groups and Host Mappings
in Installing Clusters.
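For example, you can list the node configuration ConfigMaps that the cluster stores in the openshift-node project; the default group names are listed later in this topic:
$ oc get configmaps -n openshift-node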
Node Bootstrap Workflow
The process for automatic node bootstrapping uses the following workflow:
-
By default during cluster installation, a set of clusterrole, clusterrolebinding, and serviceaccount objects are created for use in node bootstrapping:
-
The system:node-bootstrapper cluster role is used for creating certificate signing requests (CSRs) during node bootstrapping:
# oc describe clusterrole.authorization.openshift.io/system:node-bootstrapper
Name: system:node-bootstrapper
Created: 17 hours ago
Labels: kubernetes.io/bootstrapping=rbac-defaults
Annotations: authorization.openshift.io/system-only=true
openshift.io/reconcile-protect=false
Verbs Non-Resource URLs Resource Names API Groups Resources
[create get list watch] [] [] [certificates.k8s.io] [certificatesigningrequests]
-
The following node-bootstrapper service account is created in the
openshift-infra project:
# oc describe sa node-bootstrapper -n openshift-infra
Name: node-bootstrapper
Namespace: openshift-infra
Labels: <none>
Annotations: <none>
Image pull secrets: node-bootstrapper-dockercfg-f2n8r
Mountable secrets: node-bootstrapper-token-79htp
node-bootstrapper-dockercfg-f2n8r
Tokens: node-bootstrapper-token-79htp
node-bootstrapper-token-mqn2q
Events: <none>
-
The following cluster role binding associates the system:node-bootstrapper cluster role with the node-bootstrapper service account:
# oc describe clusterrolebindings system:node-bootstrapper
Name: system:node-bootstrapper
Created: 17 hours ago
Labels: <none>
Annotations: openshift.io/reconcile-protect=false
Role: /system:node-bootstrapper
Users: <none>
Groups: <none>
ServiceAccounts: openshift-infra/node-bootstrapper
Subjects: <none>
Verbs Non-Resource URLs Resource Names API Groups Resources
[create get list watch] [] [] [certificates.k8s.io] [certificatesigningrequests]
-
Also by default during cluster installation, the openshift-ansible installer creates an OpenShift Container Platform certificate authority and various other certificates, keys, and kubeconfig files in the /etc/origin/master directory. Two files of note are:
/etc/origin/master/admin.kubeconfig: uses the system:admin user.
/etc/origin/master/bootstrap.kubeconfig: used for bootstrapping nodes other than masters.
-
The /etc/origin/master/bootstrap.kubeconfig is created when the installer
uses the node-bootstrapper service account as follows:
$ oc --config=/etc/origin/master/admin.kubeconfig \
serviceaccounts create-kubeconfig node-bootstrapper \
-n openshift-infra
-
On master nodes, the /etc/origin/master/admin.kubeconfig is used as the bootstrapping file and is copied to /etc/origin/node/bootstrap.kubeconfig. On non-master nodes, the /etc/origin/master/bootstrap.kubeconfig file is copied to /etc/origin/node/bootstrap.kubeconfig on each node host.
-
The /etc/origin/master/bootstrap.kubeconfig is then passed to the kubelet using the --bootstrap-kubeconfig flag as follows:
--bootstrap-kubeconfig=/etc/origin/node/bootstrap.kubeconfig
-
The kubelet is first started with the supplied /etc/origin/node/bootstrap.kubeconfig file. After the initial connection to the master, the kubelet creates certificate signing requests (CSRs) and sends them to the master (a manual CSR approval example follows this workflow).
-
The CSRs are verified and approved via the controller manager (specifically the certificate signing controller). If approved, the kubelet client and server certificates are created in the /etc/origin/node/certificates directory.
For example:
# ls -al /etc/origin/node/certificates/
total 12
drwxr-xr-x. 2 root root 212 Jun 18 21:56 .
drwx------. 4 root root 213 Jun 19 15:18 ..
-rw-------. 1 root root 2826 Jun 18 21:53 kubelet-client-2018-06-18-21-53-15.pem
-rw-------. 1 root root 1167 Jun 18 21:53 kubelet-client-2018-06-18-21-53-45.pem
lrwxrwxrwx. 1 root root 68 Jun 18 21:53 kubelet-client-current.pem -> /etc/origin/node/certificates/kubelet-client-2018-06-18-21-53-45.pem
-rw-------. 1 root root 1447 Jun 18 21:56 kubelet-server-2018-06-18-21-56-52.pem
lrwxrwxrwx. 1 root root 68 Jun 18 21:56 kubelet-server-current.pem -> /etc/origin/node/certificates/kubelet-server-2018-06-18-21-56-52.pem
-
After the CSR approval, the node.kubeconfig file is created at
/etc/origin/node/node.kubeconfig.
-
The kubelet is restarted with the /etc/origin/node/node.kubeconfig file and the certificates in the /etc/origin/node/certificates/ directory, at which point it is ready to join the cluster.
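If CSRs are not approved automatically, they can be inspected and approved manually; the <csr_name> placeholder below stands in for a real request name:
$ oc get csr
$ oc describe csr <csr_name>
$ oc adm certificate approve <csr_name>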
Node Configuration Workflow
Sourcing a node’s configuration uses the following workflow:
-
Initially the node’s kubelet is started with the bootstrap configuration file,
bootstrap-node-config.yaml in the /etc/origin/node/ directory, created
at the time of node provisioning.
-
On each node, the node service file uses the local script
openshift-node in the /usr/local/bin/ directory to start the kubelet
with the supplied bootstrap-node-config.yaml.
-
On each master, the /etc/origin/node/pods directory contains pod manifests for the apiserver, controllers, and etcd, which are created as static pods on the masters.
-
During cluster installation, a sync DaemonSet is created, which creates a sync pod on each node. The sync pod monitors changes in the /etc/sysconfig/atomic-openshift-node file. It specifically watches for BOOTSTRAP_CONFIG_NAME to be set. BOOTSTRAP_CONFIG_NAME is set by the openshift-ansible installer and is the name of the ConfigMap based on the node configuration group the node belongs to.
By default, the installer creates the following node configuration groups:
-
node-config-master
-
node-config-infra
-
node-config-compute
-
node-config-all-in-one
-
node-config-master-infra
A ConfigMap for each group is created in the openshift-node project.
-
The sync pod extracts the appropriate ConfigMap based on the value set in BOOTSTRAP_CONFIG_NAME.
-
The sync pod converts the ConfigMap data into kubelet configurations and creates the /etc/origin/node/node-config.yaml file for that node host. If a change is made to this file (including its initial creation), the kubelet is restarted.
Modifying Node Configurations
A node’s configuration is modified by editing the appropriate ConfigMap in the
openshift-node project. The /etc/origin/node/node-config.yaml must not be
modified directly.
For example, for a node that is in the node-config-compute group, edit the
ConfigMap using:
$ oc edit cm node-config-compute -n openshift-node
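The ConfigMap embeds the node-config.yaml content, so an edit such as the following kubeletArguments stanza, shown only as an illustration with a hypothetical max-pods value, is picked up by the sync pod and written to /etc/origin/node/node-config.yaml on each matching node:
kubeletArguments:
  max-pods:
  - "250"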