OpenShift Container Platform clusters on Red Hat OpenStack Platform (RHOSP) use Octavia to handle load balancer services. As a result of this choice, such clusters have a number of functional limitations.
RHOSP Octavia has two supported providers: Amphora and OVN. These providers differ in terms of available features as well as implementation details. These distinctions affect load balancer services that are created on your cluster.
You can set the external traffic policy (ETP) parameter, .spec.externalTrafficPolicy, on a load balancer service to preserve the source IP address of incoming traffic when it reaches service endpoint pods. However, if your cluster uses the Amphora Octavia provider, the source IP of the traffic is replaced with the IP address of the Amphora VM. This behavior does not occur if your cluster uses the OVN Octavia provider.
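For illustration only, a minimal sketch of a load balancer service that requests source IP preservation; the name, namespace, selector, and ports are hypothetical placeholders:
apiVersion: v1
kind: Service
metadata:
  name: example-lb               # hypothetical service name
  namespace: example-namespace   # hypothetical namespace
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # preserve the client source IP, subject to the provider limitations described here
  selector:
    app: example
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080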
Having the ETP option set to Local requires that health monitors be created for the load balancer. Without health monitors, traffic can be routed to a node that does not have a functional endpoint, which causes the connection to drop. To force Cloud Provider OpenStack to create health monitors, you must set the value of the create-monitor option in the cloud provider configuration to true.
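As a sketch only, assuming the cloud provider configuration is stored in the cloud-provider-config ConfigMap in the openshift-config namespace (the exact location and key can vary by release), the relevant setting looks like this; Cloud Provider OpenStack also exposes related options such as monitor-delay, monitor-timeout, and monitor-max-retries to tune the monitors:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cloud-provider-config   # assumed ConfigMap name
  namespace: openshift-config   # assumed namespace
data:
  config: |
    [LoadBalancer]
    create-monitor = true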
In RHOSP 16.1 and 16.2, the OVN Octavia provider does not support health monitors. Therefore, setting the ETP option to Local is unsupported.
In RHOSP 16.1 and 16.2, the Amphora Octavia provider does not support HTTP monitors on UDP pools. As a result, UDP load balancer services have UDP-CONNECT monitors created instead. Due to implementation details, this configuration only functions properly with the OVN-Kubernetes CNI plugin. When the OpenShift SDN CNI plugin is used, nodes that have live UDP service endpoints are detected unreliably.
Use the .spec.loadBalancerSourceRanges property to restrict the traffic that can pass through the load balancer according to source IP. This property is supported for use with the Amphora Octavia provider only. If your cluster uses the OVN Octavia provider, the option is ignored and traffic is unrestricted.
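For example, a sketch that restricts a load balancer service to two documentation CIDR ranges; this takes effect only with the Amphora provider, as noted above, and the name, selector, and ports are placeholders:
apiVersion: v1
kind: Service
metadata:
  name: restricted-lb            # hypothetical service name
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:      # only these source CIDRs are allowed through the load balancer
  - 192.0.2.0/24                 # example ranges; replace with your allowed networks
  - 198.51.100.0/24
  selector:
    app: example
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 8443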
Kuryr is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.
If your OpenShift Container Platform cluster uses Kuryr and was installed on a Red Hat OpenStack Platform (RHOSP) 13 cloud that was later upgraded to RHOSP 16, you can configure it to use the Octavia OVN provider driver.
Kuryr replaces existing load balancers after you change provider drivers. This process results in some downtime.
Install the RHOSP CLI, openstack.
Install the OpenShift Container Platform CLI, oc.
Verify that the Octavia OVN driver on RHOSP is enabled.
To view a list of available Octavia drivers, on a command line, enter openstack loadbalancer provider list.
To change from the Octavia Amphora provider driver to Octavia OVN:
Open the kuryr-config ConfigMap. On a command line, enter:
$ oc -n openshift-kuryr edit cm kuryr-config
In the ConfigMap, delete the line that contains kuryr-octavia-provider: default. For example:
...
kind: ConfigMap
metadata:
  annotations:
    networkoperator.openshift.io/kuryr-octavia-provider: default (1)
...
1. Delete this line. The cluster will regenerate it with ovn as the value.
Wait for the Cluster Network Operator to detect the modification and to redeploy the kuryr-controller and kuryr-cni pods. This process might take several minutes.
Verify that the kuryr-config ConfigMap annotation is present with ovn as its value. On a command line, enter:
$ oc -n openshift-kuryr edit cm kuryr-config
The ovn provider value is displayed in the output:
...
kind: ConfigMap
metadata:
  annotations:
    networkoperator.openshift.io/kuryr-octavia-provider: ovn
...
Verify that RHOSP recreated its load balancers.
On a command line, enter:
$ openstack loadbalancer list | grep amphora
A single Amphora load balancer is displayed. For example:
a4db683b-2b7b-4988-a582-c39daaad7981 | ostest-7mbj6-kuryr-api-loadbalancer | 84c99c906edd475ba19478a9a6690efd | 172.30.0.1 | ACTIVE | amphora
Search for ovn load balancers by entering:
$ openstack loadbalancer list | grep ovn
The remaining load balancers of the ovn type are displayed. For example:
2dffe783-98ae-4048-98d0-32aa684664cc | openshift-apiserver-operator/metrics | 84c99c906edd475ba19478a9a6690efd | 172.30.167.119 | ACTIVE | ovn
0b1b2193-251f-4243-af39-2f99b29d18c5 | openshift-etcd/etcd | 84c99c906edd475ba19478a9a6690efd | 172.30.143.226 | ACTIVE | ovn
f05b07fc-01b7-4673-bd4d-adaa4391458e | openshift-dns-operator/metrics | 84c99c906edd475ba19478a9a6690efd | 172.30.152.27 | ACTIVE | ovn
OpenShift Container Platform clusters that run on Red Hat OpenStack Platform (RHOSP) can use the Octavia load balancing service to distribute traffic across multiple virtual machines (VMs) or floating IP addresses. This feature mitigates the bottleneck that single machines or addresses create.
If your cluster uses Kuryr, the Cluster Network Operator created an internal Octavia load balancer at deployment. You can use this load balancer for application network scaling.
If your cluster does not use Kuryr, you must create your own Octavia load balancer to use it for application network scaling.
If you want to use multiple API load balancers, or if your cluster does not use Kuryr, create an Octavia load balancer and then configure your cluster to use it.
Octavia is available on your Red Hat OpenStack Platform (RHOSP) deployment.
From a command line, create an Octavia load balancer that uses the Amphora driver:
$ openstack loadbalancer create --name API_OCP_CLUSTER --vip-subnet-id <id_of_worker_vms_subnet>
You can use a name of your choice instead of API_OCP_CLUSTER.
After the load balancer becomes active, create listeners:
$ openstack loadbalancer listener create --name API_OCP_CLUSTER_6443 --protocol HTTPS --protocol-port 6443 API_OCP_CLUSTER
To view the status of the load balancer, enter openstack loadbalancer list.
Create a pool that uses the round robin algorithm and has session persistence enabled:
$ openstack loadbalancer pool create --name API_OCP_CLUSTER_pool_6443 --lb-algorithm ROUND_ROBIN --session-persistence type=<source_IP_address> --listener API_OCP_CLUSTER_6443 --protocol HTTPS
To ensure that control plane machines are available, create a health monitor:
$ openstack loadbalancer healthmonitor create --delay 5 --max-retries 4 --timeout 10 --type TCP API_OCP_CLUSTER_pool_6443
Add the control plane machines as members of the load balancer pool:
$ for SERVER in MASTER-0-IP MASTER-1-IP MASTER-2-IP
do
openstack loadbalancer member create --address $SERVER --protocol-port 6443 API_OCP_CLUSTER_pool_6443
done
Optional: To reuse the cluster API floating IP address, unset it:
$ openstack floating ip unset $API_FIP
Add either the unset API_FIP or a new address to the created load balancer VIP:
$ openstack floating ip set --port $(openstack loadbalancer show -c vip_port_id -f value API_OCP_CLUSTER) $API_FIP
Your cluster now uses Octavia for load balancing.
If Kuryr uses the Octavia Amphora driver, all traffic is routed through a single Amphora virtual machine (VM). You can repeat this procedure to create additional load balancers, which can alleviate the bottleneck.
Kuryr is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.
If your cluster uses Kuryr, associate the API floating IP address of your cluster with the pre-existing Octavia load balancer.
Your OpenShift Container Platform cluster uses Kuryr.
Octavia is available on your Red Hat OpenStack Platform (RHOSP) deployment.
Optional: From a command line, to reuse the cluster API floating IP address, unset it:
$ openstack floating ip unset $API_FIP
Add either the unset API_FIP or a new address to the created load balancer VIP:
$ openstack floating ip set --port $(openstack loadbalancer show -c vip_port_id -f value ${OCP_CLUSTER}-kuryr-api-loadbalancer) $API_FIP
Your cluster now uses Octavia for load balancing.
If Kuryr uses the Octavia Amphora driver, all traffic is routed through a single Amphora virtual machine (VM). You can repeat this procedure to create additional load balancers, which can alleviate the bottleneck.
Kuryr is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.
You can use Octavia load balancers to scale Ingress controllers on clusters that use Kuryr.
Your OpenShift Container Platform cluster uses Kuryr.
Octavia is available on your RHOSP deployment.
To copy the current internal router service, on a command line, enter:
$ oc -n openshift-ingress get svc router-internal-default -o yaml > external_router.yaml
In the file external_router.yaml, change the value of metadata.name to a descriptive name, such as router-external-default, and change the value of spec.type to LoadBalancer.
apiVersion: v1
kind: Service
metadata:
  labels:
    ingresscontroller.operator.openshift.io/owning-ingresscontroller: default
  name: router-external-default (1)
  namespace: openshift-ingress
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  - name: metrics
    port: 1936
    protocol: TCP
    targetPort: 1936
  selector:
    ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default
  sessionAffinity: None
  type: LoadBalancer (2)
1. Ensure that this value is descriptive, like router-external-default.
2. Ensure that this value is LoadBalancer.
You can delete timestamps and other information that is irrelevant to load balancing.
From a command line, create a service from the external_router.yaml file:
$ oc apply -f external_router.yaml
Verify that the external IP address of the service is the same as the one that is associated with the load balancer:
On a command line, retrieve the external IP address of the service:
$ oc -n openshift-ingress get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
router-external-default LoadBalancer 172.30.235.33 10.46.22.161 80:30112/TCP,443:32359/TCP,1936:30317/TCP 3m38s
router-internal-default ClusterIP 172.30.115.123 <none> 80/TCP,443/TCP,1936/TCP 22h
Retrieve the IP address of the load balancer:
$ openstack loadbalancer list | grep router-external
| 21bf6afe-b498-4a16-a958-3229e83c002c | openshift-ingress/router-external-default | 66f3816acf1b431691b8d132cc9d793c | 172.30.235.33 | ACTIVE | octavia |
Verify that the addresses you retrieved in the previous steps are associated with each other in the floating IP list:
$ openstack floating ip list | grep 172.30.235.33
| e2f80e97-8266-4b69-8636-e58bacf1879e | 10.46.22.161 | 172.30.235.33 | 655e7122-806a-4e0a-a104-220c6e17bda6 | a565e55a-99e7-4d15-b4df-f9d7ee8c9deb | 66f3816acf1b431691b8d132cc9d793c |
You can now use the value of EXTERNAL-IP as the new Ingress address.
If Kuryr uses the Octavia Amphora driver, all traffic is routed through a single Amphora virtual machine (VM). You can repeat this procedure to create additional load balancers, which can alleviate the bottleneck.
You can configure an OpenShift Container Platform cluster on Red Hat OpenStack Platform (RHOSP) to use an external load balancer in place of the default load balancer.
You can also configure an OpenShift Container Platform cluster to use an external load balancer that supports multiple subnets. If you use multiple subnets, you can explicitly list all the IP addresses in any networks that are used by your load balancer targets. This configuration can reduce maintenance overhead because you can create and destroy nodes within those networks without reconfiguring the load balancer targets.
If you deploy your ingress pods by using a machine set on a smaller network, such as a /27 or /28, you can simplify your load balancer targets.
You do not need to specify API and Ingress static addresses for your installation program. If you choose this configuration, you must take additional actions to define network targets that accept an IP address from each referenced RHOSP subnet.
On your load balancer, TCP over ports 6443, 443, and 80 must be reachable by all users of your system that are located outside the cluster.
Load balance the application ports, 443 and 80, between all the compute nodes.
Load balance the API port, 6443, between each of the control plane nodes.
On your load balancer, port 22623, which is used to serve ignition startup configurations to nodes, is not exposed outside of the cluster.
Your load balancer can access the required ports on each node in your cluster. You can ensure this level of access by completing the following actions:
The API load balancer can access ports 22623 and 6443 on the control plane nodes.
The ingress load balancer can access ports 443 and 80 on the nodes where the ingress pods are located.
External load balancing services and the control plane nodes must run on the same L2 network, and on the same VLAN when using VLANs to route traffic between the load balancing services and the control plane nodes.
Enable access to the cluster from your load balancer on ports 6443, 443, and 80.
As an example, note this HAProxy configuration:
...
listen my-cluster-api-6443
bind 0.0.0.0:6443
mode tcp
balance roundrobin
server my-cluster-master-2 192.0.2.2:6443 check
server my-cluster-master-0 192.0.2.3:6443 check
server my-cluster-master-1 192.0.2.1:6443 check
listen my-cluster-apps-443
bind 0.0.0.0:443
mode tcp
balance roundrobin
server my-cluster-worker-0 192.0.2.6:443 check
server my-cluster-worker-1 192.0.2.5:443 check
server my-cluster-worker-2 192.0.2.4:443 check
listen my-cluster-apps-80
bind 0.0.0.0:80
mode tcp
balance roundrobin
server my-cluster-worker-0 192.0.2.7:80 check
server my-cluster-worker-1 192.0.2.9:80 check
server my-cluster-worker-2 192.0.2.8:80 check
Add records to your DNS server for the cluster API and apps over the load balancer. For example:
<load_balancer_ip_address> api.<cluster_name>.<base_domain>
<load_balancer_ip_address> apps.<cluster_name>.<base_domain>
From a command line, use curl to verify that the external load balancer and DNS configuration are operational.
Verify that the cluster API is accessible:
$ curl https://<loadbalancer_ip_address>:6443/version --insecure
If the configuration is correct, you receive a JSON object in response:
{
"major": "1",
"minor": "11+",
"gitVersion": "v1.11.0+ad103ed",
"gitCommit": "ad103ed",
"gitTreeState": "clean",
"buildDate": "2019-01-09T06:44:10Z",
"goVersion": "go1.10.3",
"compiler": "gc",
"platform": "linux/amd64"
}
Verify that cluster applications are accessible:
You can also verify application accessibility by opening the OpenShift Container Platform console in a web browser.
$ curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure
If the configuration is correct, you receive an HTTP response:
HTTP/1.1 302 Found
content-length: 0
location: https://console-openshift-console.apps.<cluster_name>.<base_domain>/
cache-control: no-cache

HTTP/1.1 200 OK
referrer-policy: strict-origin-when-cross-origin
set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure
x-content-type-options: nosniff
x-dns-prefetch-control: off
x-frame-options: DENY
x-xss-protection: 1; mode=block
date: Tue, 17 Nov 2020 08:42:10 GMT
content-type: text/html; charset=utf-8
set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None
cache-control: private