The master is the host or hosts that contain the control plane components,
including the API server, controller manager server, and etcd. The master
manages nodes in its Kubernetes cluster and schedules
pods to run on those nodes.
Control Plane Static Pods
The core control plane components, the API
server and the controller manager, run as
static pods operated by the kubelet.
For masters that have etcd co-located on the same host, etcd is also moved to
static pods. RPM-based etcd is still supported on etcd hosts that are not also
masters.
In addition, the node components openshift-sdn and
openvswitch are now run using a DaemonSet instead of a systemd service.
Figure 1. Control plane host architecture changes
Even with control plane components running as static pods, master hosts still
source their configuration from the /etc/origin/master/master-config.yaml
file, as described in the
Master and Node Configuration topic.
Startup Sequence Overview
Hyperkube is a binary that contains all of Kubernetes (kube-apiserver, controller-manager, scheduler, proxy, and kubelet). On startup, the kubelet creates the kubepods.slice. Next, the kubelet creates the QoS-level slices burstable.slice and best-effort.slice inside the kubepods.slice. When a pod starts, the kubelet creates a pod-level slice with the format
pod<UUID-of-pod>.slice and passes that path to the runtime on the other side of the Container Runtime Interface (CRI). Docker or CRI-O then creates the container-level slices inside the pod-level slice.
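The slice naming above can be illustrated with a short sketch. The pod UUID below is made up, and the dash-to-underscore escaping reflects how systemd unit names encode the UUID when the kubelet uses the systemd cgroup driver:

```shell
# Sketch of the cgroup path built during the startup sequence above.
# The pod UUID is illustrative; with the systemd cgroup driver, dashes
# in the UUID are replaced by underscores in the slice unit name.
POD_UID="8f2b1e0c-1234-5678-9abc-def012345678"
UNIT_UID="${POD_UID//-/_}"
echo "kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod${UNIT_UID}.slice"
```

Docker or CRI-O then nests the container-level slices under this pod-level slice.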
The kubelet on master nodes automatically creates mirror pods on the API
server for each of the control plane static pods so that they are visible in the
cluster in the kube-system project. Manifests for these static pods are
installed by default by the openshift-ansible installer and are located in the
/etc/origin/node/pods directory on the master host.
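For illustration, a heavily trimmed static pod manifest of the kind found in /etc/origin/node/pods might look like the following. The pod name, image, and command here are assumptions for the sketch, not copied from a real installation:

```yaml
# Hypothetical, trimmed static pod manifest. Field values are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: master-api
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: api
    image: openshift/origin-control-plane   # image name is an assumption
    command: ["/usr/bin/openshift", "start", "master", "api"]
    volumeMounts:
    - name: master-config
      mountPath: /etc/origin/master
  volumes:
  - name: master-config
    hostPath:
      path: /etc/origin/master
```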
These pods have the following
hostPath volumes defined:
Contains all certificates, configuration files, and the admin.kubeconfig file.
Contains volumes and potential core dumps of the binary.
Contains cloud provider specific configuration (AWS, Azure, etc.).
Contains additional third party volume plug-ins.
Contains additional third party volume plug-ins for system containers.
The set of operations you can do on the static pods is limited. For example:
$ oc logs master-api-<hostname> -n kube-system
returns the standard output from the API server. However:
$ oc delete pod master-api-<hostname> -n kube-system
will not actually delete the pod. Because the static pod is defined by its
manifest on disk, the kubelet simply re-creates the mirror pod object on the
API server.
As another example, a cluster administrator might want to perform a common
operation, such as increasing the
loglevel of the API server to provide more
verbose data if a problem occurs. To do so, edit the
/etc/origin/master/master.env file, where the
--loglevel parameter in the
OPTIONS variable can be modified; this value is passed to the process running
inside the container. Changes require a restart of the process running inside
the container.
Restarting Master Services
To restart control plane services running in control plane static pods, use the
master-restart command on the master host.
To restart the master API:
# master-restart api
To restart the controllers:
# master-restart controllers
Viewing Master Service Logs
To view logs for control plane services running in control plane static pods,
use the master-logs command for the respective component:
# master-logs api api
# master-logs controllers controllers
High Availability Masters
You can optionally configure your masters for high
availability (HA) to ensure that the cluster has no single point of failure.
To mitigate concerns about availability of the master, two activities are
recommended:
A runbook entry should be created for
reconstructing the master. A runbook entry is a necessary backstop for any
highly-available service. Additional solutions merely control the frequency
with which the runbook must be consulted. For example, a cold standby of the master
host can adequately fulfill SLAs that require no more than minutes of downtime
for creation of new applications or recovery of failed application components.
Use a high availability solution to configure your masters and ensure that the
cluster has no single point of failure. The
installation documentation provides specific examples using the
native HA method and
configuring HAProxy. You can also take the concepts and apply them towards your
existing HA solutions using the
native method instead of HAProxy.
In production OpenShift Container Platform clusters, you must maintain high availability
of the API Server load balancer. If the API Server load balancer is not
available, nodes cannot report their status, all their pods are marked dead,
and the pods' endpoints are removed from the service.
In addition to configuring HA for OpenShift Container Platform, you must separately configure
HA for the API Server load balancer. To configure HA, it is much preferred to
integrate an enterprise load balancer (LB) such as an F5 Big-IP™ or a Citrix
Netscaler™ appliance. If such solutions are not available, it is possible to
run multiple HAProxy load balancers and use Keepalived to provide a floating
virtual IP address for HA. However, this solution is not recommended for
production instances.
When using the
native HA method with HAProxy, master components have the following
availability:

Table 2. Availability Matrix with HAProxy

Role                        Availability
etcd                        Fully redundant deployment with load balancing.
                            Can be installed on separate hosts or collocated
                            on master hosts.
API Server                  Managed by HAProxy.
Controller Manager Server   One instance is elected as a cluster leader at a
                            time.
HAProxy                     Balances load between API master endpoints.
While clustered etcd requires an odd number of hosts for quorum, the master
services have no quorum or requirement that they have an odd number of hosts.
However, since you need at least two master services for HA, it is common to
maintain a uniform odd number of hosts when collocating master services and
etcd.
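The quorum arithmetic behind that convention can be sketched with shell integer math; a majority is n/2 + 1, so even member counts add no extra failure tolerance:

```shell
# Sketch: etcd quorum size and failure tolerance for common cluster sizes.
for n in 1 2 3 4 5; do
  echo "$n hosts: quorum $(( n / 2 + 1 )), tolerates $(( (n - 1) / 2 )) failure(s)"
done
```

Note that four hosts tolerate no more failures than three, which is why odd cluster sizes are preferred.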