This topic describes how to set up highly-available services on your OpenShift cluster.
The Kubernetes replication controller ensures that the deployment requirements, in particular the number of replicas, are satisfied when the appropriate resources are available. When run with two or more replicas, the router can be resilient to failures, providing a highly-available service. Depending on how the router instances are discovered (via a service, DNS entry, or IP addresses), this could impose operational requirements to handle failure cases when one or more router instances are "unreachable".
For some IP-based traffic services, virtual IP addresses (VIPs) should always be serviced for as long as a single instance is available. This simplifies the operational overhead and handles failure cases gracefully.
Even though a service is highly available, performance can still be affected.
Use cases for high-availability include:
I want my cluster to be assigned a resource set and I want the cluster to automatically manage those resources.
I want my cluster to be assigned a set of VIPs that the cluster manages and migrates (with zero or minimal downtime) on failure conditions, and I should not be required to perform any manual interactions to update the upstream "discovery" sources (e.g., DNS). The cluster should service all the assigned VIPs when at least a single node is available, despite the current available resources not being sufficient to reach the desired state.
You can configure a highly-available router or network setup by running multiple instances of the pod and fronting them with a balancing tier. This can be something as simple as DNS round robin, or as complex as multiple load-balancing layers.
Using IP failover involves switching IP addresses to a redundant or stand-by set of nodes on failure conditions.
The oadm ipfailover command helps set up the VIP failover configuration. As an administrator, you can configure IP failover on an entire cluster, or on a subset of nodes, as defined by a label selector. If you are running in production, match the label selector with at least two nodes to ensure you have failover protection, and provide a --replicas=<n> value that matches the number of nodes for the given label selector:
$ oadm ipfailover [<Ip_failover_config_name>] <options> --replicas=<n>
The oadm ipfailover command ensures that a failover pod runs on each of the nodes matching the constraints or label used. This pod uses VRRP (Virtual Router Redundancy Protocol) with Keepalived to ensure that the service on the watched port is available, and, if it is not, Keepalived automatically floats the VIPs.
Keepalived manages a set of virtual IP addresses. The administrator must make sure that all these addresses:
Are accessible on the configured hosts from outside the cluster.
Are not used for any other purpose within the cluster.
Keepalived on each node determines whether the needed service is running. If it is, VIPs are supported and Keepalived participates in the negotiation to determine which node will serve the VIP. For a node to participate, the service must be listening on the watch port on a VIP or the check must be disabled.
Each VIP in the set may end up being served by a different node.
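For illustration, the per-node Keepalived configuration behind this behavior looks conceptually like the following fragment. This is a hand-written sketch, not the exact configuration the failover pod generates; the instance name, interface, priority, and check script shown here are assumptions:

```
# Sketch of a Keepalived VRRP configuration for one VIP (illustrative only).
vrrp_script chk_watch_port {
    # Succeeds only if something is listening on the watch port (80 here).
    script "</dev/tcp/127.0.0.1/80"
    interval 2
}

vrrp_instance sample_VIP_1 {
    interface eth0
    state BACKUP
    virtual_router_id 1
    priority 100
    track_script {
        chk_watch_port
    }
    virtual_ipaddress {
        10.245.2.101
    }
}
```

Keepalived runs the check script periodically; when it fails on the node currently holding the VIP, the remaining nodes renegotiate and one of them takes over that address.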
Option | Variable Name | Notes
---|---|---
--virtual-ips | OPENSHIFT_HA_VIRTUAL_IPS | The list of IP address ranges to replicate. This must be provided. (For example, 1.2.3.4-6,1.2.3.9.)
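The range notation accepts both individual addresses and ranges over the last octet. As a sketch of how such a specification expands into individual VIPs (expand_vips is a hypothetical helper written for illustration, not part of OpenShift):

```shell
# Expand a --virtual-ips style specification such as "1.2.3.4-6,1.2.3.9"
# into individual addresses. Hypothetical helper for illustration only.
expand_vips() {
  spec=$1
  for entry in $(printf '%s' "$spec" | tr ',' ' '); do
    case $entry in
      *-*)
        prefix=${entry%.*}   # e.g. 1.2.3
        last=${entry##*.}    # e.g. 4-6
        start=${last%-*}     # e.g. 4
        end=${entry#*-}      # e.g. 6
        i=$start
        while [ "$i" -le "$end" ]; do
          echo "$prefix.$i"
          i=$((i + 1))
        done
        ;;
      *)
        echo "$entry"
        ;;
    esac
  done
}

# Prints 1.2.3.4, 1.2.3.5, 1.2.3.6, and 1.2.3.9, one per line.
expand_vips "1.2.3.4-6,1.2.3.9"
```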
The following steps describe how to set up a highly-available router environment with IP failover:
Label the nodes for the service. This step can be optional if you run the service on any of the nodes in your Kubernetes cluster and use VIPs that can float within those nodes. This process may already exist within a complex cluster, in that nodes may be filtered by any constraints or requirements specified (e.g., nodes with SSD drives, or higher CPU, memory, or disk requirements, etc.).
The following example labels router instances that service traffic in the US west geography with ha-router=geo-us-west:
$ oc label nodes openshift-node-{5,6,7,8,9} "ha-router=geo-us-west"
OpenShift’s ipfailover internally uses keepalived, so ensure that multicast is enabled on the nodes labeled above and that the nodes can accept network traffic for 224.0.0.18 (the VRRP multicast IP address). Depending on your environment’s multicast configuration, you may need to add an iptables rule to each of the above labeled nodes. If you do need to add the iptables rules, please also ensure that the rules persist after a system restart:
$ for node in openshift-node-{5,6,7,8,9}; do
  ssh $node <<EOF
export interface=${interface:-"eth0"}
echo "Check multicast enabled ... "
ifconfig $interface | grep -i MULTICAST
echo "Check multicast groups ... "
netstat -g -n | grep 224.0.0 | grep $interface
echo "Optionally, add accept rule and persist it ... "
sudo /sbin/iptables -I INPUT -i $interface -d 224.0.0.18/32 -j ACCEPT
echo "Please ensure the above rule is added on system restarts."
EOF
done
Depending on your environment policies, you can either reuse the router service account created previously or create a new ipfailover service account.
Ensure that either the router service account exists as described in Deploying a Router or create a new ipfailover service account. The example below creates a new service account with the name ipfailover:
$ echo '
{
  "kind": "ServiceAccount",
  "apiVersion": "v1",
  "metadata": {
    "name": "ipfailover"
  }
}
' | oc create -f -
You can manually edit the privileged SCC and add the ipfailover service account, or, if you have jq installed, you can script the edit.
To manually edit the privileged SCC, run:
$ oc edit scc privileged
Then add the ipfailover service account in the form system:serviceaccount:<project>:<name> to the users section:
...
users:
- system:serviceaccount:openshift-infra:build-controller
- system:serviceaccount:default:router
- system:serviceaccount:default:ipfailover
Alternatively, to script the edit of the privileged SCC with jq, run:
$ oc get scc privileged -o json | jq '.users |= .+ ["system:serviceaccount:default:ipfailover"]' | oc replace scc -f -
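The jq filter used above appends one entry to the users array; its effect can be previewed against a minimal sample document (the JSON below is illustrative, not a real SCC):

```shell
# Preview the users-append filter on a tiny sample document (requires jq).
echo '{"users":["system:serviceaccount:default:router"]}' \
  | jq '.users |= . + ["system:serviceaccount:default:ipfailover"]'
# The output contains both the router and the ipfailover service accounts.
```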
Start the router with at least two replicas on nodes matching the labels used in the first step. The following example runs three instances using the ipfailover service account:
$ oadm router ha-router-us-west --replicas=3 \
    --selector="ha-router=geo-us-west" --labels="ha-router=geo-us-west" \
    --credentials="$KUBECONFIG" --service-account=ipfailover
The above command runs fewer router replicas than there are available nodes, so that, in the event of node failures, Kubernetes can still ensure three available instances, until the number of available nodes labeled ha-router=geo-us-west falls below three. Additionally, because the router uses the host network as well as ports 80 and 443, running fewer replicas than nodes leaves spare capacity and helps ensure a higher Service Level Availability (SLA). If there are no constraints on the service being set up for failover, it is possible to target the service to run on one or more, or even all, of the labeled nodes.
Finally, configure the VIPs and failover for the nodes labeled with ha-router=geo-us-west in the first step. Ensure that the number of replicas matches the number of nodes that satisfy the label set up in the first step. The name of the ipfailover configuration (ipf-ha-router-us-west in the example below) must be different from the name of the router configuration (ha-router-us-west), because both the router and ipfailover create deployment configurations with those names. Specify the VIP addresses and the port number that ipfailover should monitor on the desired instances:
$ oadm ipfailover ipf-ha-router-us-west --replicas=5 --watch-port=80 \
    --selector="ha-router=geo-us-west" --virtual-ips="10.245.2.101-105" \
    --credentials="$KUBECONFIG" --service-account=ipfailover --create
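The options given on the command line end up as environment variables on the Keepalived container of the generated deployment configuration. Assuming the OPENSHIFT_HA_* variable names used by the ipfailover image, a fragment of that deployment configuration would look roughly like:

```
env:
- name: OPENSHIFT_HA_VIRTUAL_IPS
  value: "10.245.2.101-105"
- name: OPENSHIFT_HA_MONITOR_PORT
  value: "80"
```

This is a sketch for orientation; inspect the generated deployment configuration (for example, with oc get -o yaml) to see the exact values.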
The following steps describe how to set up a highly-available IP-based network service with IP failover:
Label the nodes for the service. This step can be optional if you run the service on any of the nodes in your Kubernetes cluster and use VIPs that can float within those nodes. This process may already exist within a complex cluster, in that the nodes may be filtered by any constraints or requirements specified (e.g., nodes with SSD drives, or higher CPU, memory, or disk requirements, etc.).
The following example labels a highly-available cache service that is listening on port 9736 as ha-cache=geo:
$ oc label nodes openshift-node-{6,3,7,9} "ha-cache=geo"
OpenShift’s ipfailover internally uses keepalived, so ensure that multicast is enabled on the nodes labeled above and that the nodes can accept network traffic for 224.0.0.18 (the VRRP multicast IP address). Depending on your environment’s multicast configuration, you may need to add an iptables rule to each of the above labeled nodes. If you do need to add the iptables rules, please also ensure that the rules persist after a system restart:
$ for node in openshift-node-{6,3,7,9}; do
  ssh $node <<EOF
export interface=${interface:-"eth0"}
echo "Check multicast enabled ... "
ifconfig $interface | grep -i MULTICAST
echo "Check multicast groups ... "
netstat -g -n | grep 224.0.0 | grep $interface
echo "Optionally, add accept rule and persist it ... "
sudo /sbin/iptables -I INPUT -i $interface -d 224.0.0.18/32 -j ACCEPT
echo "Please ensure the above rule is added on system restarts."
EOF
done
Create a new ipfailover service account:
$ echo '
{
  "kind": "ServiceAccount",
  "apiVersion": "v1",
  "metadata": {
    "name": "ipfailover"
  }
}
' | oc create -f -
You can manually edit the privileged SCC and add the ipfailover service account, or, if you have jq installed, you can script the edit.
To manually edit the privileged SCC, run:
$ oc edit scc privileged
Then add the ipfailover service account in the form system:serviceaccount:<project>:<name> to the users section:
...
users:
- system:serviceaccount:openshift-infra:build-controller
- system:serviceaccount:default:router
- system:serviceaccount:default:ipfailover
Alternatively, to script the edit of the privileged SCC with jq, run:
$ oc get scc privileged -o json | jq '.users |= .+ ["system:serviceaccount:default:ipfailover"]' | oc replace scc -f -
Run a geo-cache service with two or more replicas. An example configuration for running a geo-cache service is provided here.
Be sure to replace the myimages/geo-cache Docker image referenced in the file with your intended image. Also, change the number of replicas to the desired amount and ensure the label matches the one used in the first step.
$ oc create -n <namespace> -f ./examples/geo-cache.json
Finally, configure the VIPs and failover for the nodes labeled with ha-cache=geo in the first step. Ensure that the number of replicas matches the number of nodes that satisfy the label set up in the first step. Specify the VIP addresses and the port number that ipfailover should monitor for the desired instances:
$ oadm ipfailover ipf-ha-geo-cache --replicas=4 --selector="ha-cache=geo" \
    --virtual-ips=10.245.2.101-104 --watch-port=9736 \
    --credentials="$KUBECONFIG" --service-account=ipfailover --create
Using the above example, you can now use the VIPs 10.245.2.101 through 10.245.2.104 to send traffic to the geo-cache service. If a particular geo-cache instance is "unreachable", perhaps due to a node failure, Keepalived ensures that the VIPs automatically float amongst the group of nodes labeled "ha-cache=geo" and the service is still reachable via the virtual IP addresses.