The OpenShift Enterprise router is the ingress point for all external traffic destined for services in your OpenShift installation. OpenShift provides and supports the following two router plug-ins:
The HAProxy template router is the default plug-in. It uses the openshift3/ose-haproxy-router image to run an HAProxy instance alongside the template router plug-in inside a container on OpenShift Enterprise. It currently supports HTTP(S) traffic and TLS-enabled traffic via SNI. The router’s container listens on the host network interface, unlike most containers that listen only on private IPs. The router proxies external requests for route names to the IPs of actual pods identified by the service associated with the route.
The F5 router integrates with an existing F5 BIG-IP® system in your environment to synchronize routes. F5 BIG-IP® version 11.4 or newer is required in order to have the F5 iControl REST API.
The F5 router plug-in is available starting in OpenShift Enterprise 3.0.2.
Before deploying an OpenShift Enterprise cluster, you must have a service account for the router. Starting in OpenShift Enterprise 3.1, a router service account is automatically created during a quick or advanced installation (previously, this required manual creation). This service account has permissions to a security context constraint (SCC) that allows it to specify host ports.
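To verify that the service account exists and to inspect the SCC it relies on, you can check both objects directly. This is only a sketch; it assumes the service account is named router in the default project and that the relevant SCC is hostnetwork (the SCC referenced later in this topic):
$ # Assumes the router service account lives in the default project and uses the hostnetwork SCC.
$ oc get serviceaccount router -n default
$ oc get scc hostnetwork -o yaml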
Use of labels (e.g., to define router shards) requires the cluster-reader permission:
$ oadm policy add-cluster-role-to-user \
    cluster-reader \
    system:serviceaccount:default:router
The oadm router command is provided with the administrator CLI to simplify the tasks of setting up routers in a new installation. If you followed the quick installation, then a default router was automatically created for you. The oadm router command creates the service and deployment configuration objects. Just about every form of communication between OpenShift Enterprise components is secured by TLS and uses various certificates and authentication methods. Use the --credentials option to specify what credentials the router should use to contact the master.
Routers directly attach to ports 80 and 443 on all interfaces on a host. Restrict routers to hosts where ports 80 and 443 are available and not being consumed by another service, and set this using node selectors and the scheduler configuration. As an example, you can achieve this by dedicating infrastructure nodes to run services such as routers.
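For example, one way to dedicate a node to infrastructure services is to label it and then target that label with the router's node selector, as shown later in this topic. A sketch, using a hypothetical node name:
$ # node1.example.com is a placeholder; substitute one of your own nodes.
$ oc label node node1.example.com region=infra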
It is recommended to use separate, distinct openshift-router credentials with your router. The credentials can be provided using the --credentials option, which points to the CLI configuration file:
$ oadm router --dry-run --service-account=router \
    --credentials='/etc/origin/master/openshift-router.kubeconfig' (1)
1 | --credentials is the path to the CLI configuration file for the openshift-router. |
The default router service account, named router, is automatically created during quick and advanced installations. To verify that this account already exists:
$ oadm router --dry-run \
    --credentials='/etc/origin/master/openshift-router.kubeconfig' \
    --service-account=router
To see what the default router would look like if created:
$ oadm router -o yaml \
    --credentials='/etc/origin/master/openshift-router.kubeconfig' \
    --service-account=router
The quick installation process automatically creates a default router. To create a router if it does not exist:
$ oadm router <router_name> --replicas=<number> \
    --credentials='/etc/origin/master/openshift-router.kubeconfig' \
    --service-account=router
To deploy the router to any node(s) that match a specified node label:
$ oadm router <router_name> --replicas=<number> --selector=<label> \
    --credentials='/etc/origin/master/openshift-router.kubeconfig' \
    --service-account=router
For example, if you want to create a router named router and have it placed on a node labeled with region=infra:
$ oadm router router --replicas=1 --selector='region=infra' \
    --credentials='/etc/origin/master/openshift-router.kubeconfig' \
    --service-account=router
During advanced installation, the openshift_hosted_router_selector and openshift_registry_selector Ansible settings are set to region=infra by default. The default router and registry will only be automatically deployed if a node exists that matches the region=infra label.
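For reference, the corresponding Ansible inventory variables look roughly like the following. This is a sketch only; the values simply restate the region=infra default described above:
# In the [OSEv3:vars] section of your Ansible inventory (sketch only):
openshift_hosted_router_selector='region=infra'
openshift_registry_selector='region=infra'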
Multiple instances are created on different hosts according to the scheduler policy.
To deploy the router to any node(s) that match a specified node label:
$ oadm router <router_name> --replicas=<number> --selector=<label> \
    --service-account=router
For example, if you want to create a router named router and have it placed on a node labeled with region=infra:
$ oadm router router --replicas=1 --selector='region=infra' \
    --service-account=router
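Before running the command, you can confirm that at least one node carries the label that the selector targets. A sketch:
$ oc get nodes --selector='region=infra'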
To use a different router image and view the router configuration that would be used:
$ oadm router <router_name> -o <format> --images=<image> \
    --credentials='/etc/origin/master/openshift-router.kubeconfig' \
    --service-account=router
For example:
$ oadm router region-west -o yaml --images=myrepo/somerouter:mytag \
    --credentials='/etc/origin/master/openshift-router.kubeconfig' \
    --service-account=router
You can set up a highly-available router on your OpenShift Enterprise cluster using IP failover.
You can customize the service ports that a template router binds to by setting the ROUTER_SERVICE_HTTP_PORT and ROUTER_SERVICE_HTTPS_PORT environment variables. This can be done by creating a template router, then editing its deployment configuration.
The following example creates a router deployment with 0 replicas and customizes the router service HTTP and HTTPS ports, then scales it appropriately (to 1 replica).
$ oadm router --replicas=0 --ports='10080:10080,10443:10443' (1)
$ oc set env dc/router ROUTER_SERVICE_HTTP_PORT=10080 \
    ROUTER_SERVICE_HTTPS_PORT=10443
$ oc scale dc/router --replicas=1
1 | Ensures exposed ports are appropriately set for routers that use the container networking mode --host-network=false. |
If you do customize the template router service ports, you will also need to ensure that the nodes where the router pods run have those custom ports opened in the firewall (either via Ansible or iptables).
The following is an example using iptables to open the custom router service ports.
$ iptables -A INPUT -p tcp --dport 10080 -j ACCEPT
$ iptables -A INPUT -p tcp --dport 10443 -j ACCEPT
An administrator can create multiple routers with the same definition to serve the same set of routes. By creating different groups of routers with different namespace or route selectors, an administrator can vary the routes that each router serves.
Multiple routers can be grouped to distribute routing load in the cluster and separate tenants to different routers or shards. Each router or shard in the group handles routes based on the selectors in the router. An administrator can create shards over the whole cluster using ROUTE_LABELS. A user can create shards over a namespace (project) by using NAMESPACE_LABELS.
Making specific routers deploy on specific nodes requires two steps:
Add a label to the desired node:
$ oc label node 10.254.254.28 "router=first"
Add a node selector to the router deployment configuration:
$ oc edit dc <deploymentConfigName>
Add the template.spec.nodeSelector field with a key and value corresponding to the label:
...
  template:
    metadata:
      creationTimestamp: null
      labels:
        router: router1
    spec:
      nodeSelector: (1)
        router: "first"
...
1 | The key and value are router and first, respectively, corresponding to the router=first label. |
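Alternatively, the same node selector can be applied without opening an editor by patching the deployment configuration. This is a sketch that mirrors the label used above:
$ oc patch dc <deploymentConfigName> \
    -p '{"spec":{"template":{"spec":{"nodeSelector":{"router":"first"}}}}}'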
The access controls are based on the service account that the router is run with. Using NAMESPACE_LABELS and/or ROUTE_LABELS, a router can filter out the namespaces and/or routes that it should service. This enables you to partition routes amongst multiple router deployments, effectively distributing the set of routes.
Example: A router deployment finops-router is run with route selector NAMESPACE_LABELS="name in (finance, ops)" and a router deployment dev-router is run with route selector NAMESPACE_LABELS="name=dev". If all routes are in the three namespaces finance, ops, or dev, then this configuration effectively distributes our routes across two router deployments.
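A minimal sketch of how such a pair of routers could be created, following the oadm router and oc set env pattern used elsewhere in this topic (the router names and replica counts are illustrative):
$ # Illustrative names; adjust replicas and selectors to your environment.
$ oadm router finops-router --replicas=1 --service-account=router
$ oc set env dc/finops-router NAMESPACE_LABELS="name in (finance, ops)"
$ oadm router dev-router --replicas=1 --service-account=router
$ oc set env dc/dev-router NAMESPACE_LABELS="name=dev"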
In the above scenario, sharding becomes a special case of partitioning with no overlapping sets. Routes are divided amongst multiple router shards.
The criteria for route selection governs how the routes are distributed. It is possible to have routes that overlap across multiple router deployments.
Example: In addition to the finops-router and dev-router in the example above, we also have a devops-router, which is run with a route selector NAMESPACE_LABELS="name in (dev, ops)". The routes in namespaces dev or ops are now serviced by two different router deployments. This becomes a case where we have partitioned the routes with an overlapping set.
In addition, this enables us to create more complex routing rules, such as diverting high-priority traffic to the dedicated finops-router while sending lower-priority traffic to the devops-router.
NAMESPACE_LABELS allows filtering the projects to service and selecting all the routes from those projects, but we may want to partition routes based on other criteria in the routes themselves. The ROUTE_LABELS selector allows you to slice-and-dice the routes themselves.
Example: A router deployment prod-router is run with route selector ROUTE_LABELS="mydeployment=prod" and a router deployment devtest-router is run with route selector ROUTE_LABELS="mydeployment in (dev, test)". This example assumes you have all the routes you wish to be serviced tagged with a label "mydeployment=<tag>".
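Routes are tagged with ordinary labels, so a sketch of tagging a route for the prod-router above might look like this (the route name myroute is hypothetical):
$ # myroute is a placeholder route name.
$ oc label route myroute mydeployment=prod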
Router sharding lets you select how routes are distributed among a set of routers.
Router sharding is based on labels; you set labels on the routes in the pool, and express the desired subset of those routes for the router to serve with a selection expression via the oc set env command. First, ensure that the service account associated with the router has the cluster-reader permission.
The rest of this section describes an extended example. Suppose there are 26 routes, named a — z, in the pool, with various labels:
sla=high       geo=east     hw=modest     dept=finance
sla=medium     geo=west     hw=strong     dept=dev
sla=low                                   dept=ops
These labels express the concepts: service level agreement, geographical location, hardware requirements, and department. The routes in the pool can have at most one label from each column. Some routes may have other labels entirely, or none at all.
Name(s) | SLA | Geo | HW | Dept | Other Labels |
---|---|---|---|---|---|
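Labels like these are applied to routes with oc label. For example, a sketch of labeling the hypothetical route a from the pool above:
$ # Route name "a" and its labels come from the example pool above.
$ oc label route a sla=high geo=east hw=modest dept=finance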
Here is a convenience script mkshard that illustrates how oadm router, oc set env, and oc scale work together to make a router shard.
#!/bin/bash
# Usage: mkshard ID SELECTION-EXPRESSION
id=$1
sel="$2"
router=router-shard-$id (1)
oadm router $router --replicas=0 (2)
dc=dc/router-shard-$id (3)
oc set env $dc ROUTE_LABELS="$sel" (4)
oc scale $dc --replicas=3 (5)
1 | The created router has name router-shard-<id> . |
2 | Specify no scaling for now. |
3 | The deployment configuration for the router. |
4 | Set the selection expression using oc set env. The selection expression is the value of the ROUTE_LABELS environment variable. |
5 | Scale it up. |
Running mkshard several times creates several routers:
Router | Selection Expression | Routes |
---|---|---|
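For example, a sketch of invoking mkshard three times; the first two selection expressions are purely illustrative, while the third matches the router-shard-3 example used later in this section:
$ mkshard 1 'sla=high'
$ mkshard 2 'geo=west'
$ mkshard 3 'dept=dev'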
Because a router shard is a construct based on labels, you can modify either the labels (via oc label) or the selection expression.
This section extends the example started in the Creating Router Shards section, demonstrating how to change the selection expression.
Here is a convenience script modshard that modifies an existing router to use a new selection expression:
#!/bin/bash
# Usage: modshard ID SELECTION-EXPRESSION...
id=$1
shift
router=router-shard-$id (1)
dc=dc/$router (2)
oc scale $dc --replicas=0 (3)
oc set env $dc "$@" (4)
oc scale $dc --replicas=3 (5)
1 | The modified router has name router-shard-<id> . |
2 | The deployment configuration where the modifications occur. |
3 | Scale it down. |
4 | Set the new selection expression using oc set env. Unlike mkshard from the Creating Router Shards section, the selection expression specified as the non-ID arguments to modshard must include the environment variable name as well as its value. |
5 | Scale it back up. |
For example, to expand the department for router-shard-3 to include ops as well as dev:
$ modshard 3 ROUTE_LABELS='dept in (dev, ops)'
The result is that router-shard-3 now selects routes g — s (the combined sets of g — k and l — s).
This example takes into account that there are only three departments in this example scenario, and specifies a department to leave out of the shard, thus achieving the same result as the preceding example:
$ modshard 3 ROUTE_LABELS='dept != finance'
This example shows three comma-separated qualities, and results in only route b being selected:
$ modshard 3 ROUTE_LABELS='hw=strong,type=dynamic,geo=west'
Similarly to ROUTE_LABELS, which involves a route’s labels, you can select routes based on the labels of the route’s namespace with the NAMESPACE_LABELS environment variable.
This example modifies router-shard-3 to serve routes whose namespace has the label frequency=weekly:
$ modshard 3 NAMESPACE_LABELS='frequency=weekly'
The last example combines ROUTE_LABELS and NAMESPACE_LABELS to select routes with label sla=low and whose namespace has the label frequency=weekly:
$ modshard 3 \
    NAMESPACE_LABELS='frequency=weekly' \
    ROUTE_LABELS='sla=low'
The routes for a project can be handled by a selected router by using NAMESPACE_LABELS. The router is given a selector for a NAMESPACE_LABELS label, and the project that wants to use the router applies the NAMESPACE_LABELS label to its namespace.
First, ensure that the service account associated with the router has the cluster-reader permission. This permits the router to read the labels that are applied to the namespaces.
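If the permission has not been granted yet, it can be added with the same command shown earlier in this topic (this assumes the router service account is named router in the default project):
$ oadm policy add-cluster-role-to-user \
    cluster-reader \
    system:serviceaccount:default:router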
Now create and label the router:
$ oadm router ... --service-account=router
$ oc set env dc/router NAMESPACE_LABELS="router=r1"
Because the router has a selector for a namespace, the router will handle routes for that namespace. So, for example:
$ oc label namespace default "router=r1"
Now create routes in the default namespace, and the route is available in the default router:
$ oc create -f route1.yaml
Now create a new project (namespace) and create a route, route2.
$ oc new-project p1
$ oc create -f route2.yaml
Notice that the route is not available in your router. Now label namespace p1 with "router=r1":
$ oc label namespace p1 "router=r1"
This makes the route available to the router.
Note that removing the label from the namespace does not take effect immediately (the router does not see the update), so if you redeploy or start a new router pod, you will see the effect of the unlabelled namespace:
$ oc scale dc/router --replicas=0 && oc scale dc/router --replicas=1
When exposing a service, a user can use the same route from the DNS name that external users use to access the application. The network administrator of the external network must make sure the host name resolves to the name of a router that has admitted the route. The user can set up their DNS with a CNAME that points to this host name. However, the user may not know the host name of the router. When it is not known, the cluster administrator can provide it.
The cluster administrator can use the --router-canonical-hostname option with the router’s canonical host name when creating the router. For example:
# oadm router myrouter --router-canonical-hostname="rtr.example.com"
This creates the ROUTER_CANONICAL_HOSTNAME environment variable in the router’s deployment configuration containing the host name of the router.
For routers that already exist, the cluster administrator can edit the router’s deployment configuration and add the ROUTER_CANONICAL_HOSTNAME environment variable:
spec:
  template:
    spec:
      containers:
      - env:
        - name: ROUTER_CANONICAL_HOSTNAME
          value: rtr.example.com
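Alternatively, a sketch of setting the same variable without opening an editor, assuming the router created above is named myrouter:
$ oc set env dc/myrouter ROUTER_CANONICAL_HOSTNAME=rtr.example.com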
The ROUTER_CANONICAL_HOSTNAME value is displayed in the route status for all routers that have admitted the route. The route status is refreshed every time the router is reloaded.
When a user creates a route, all of the active routers evaluate the route and, if conditions are met, admit it. When a router that defines the ROUTER_CANONICAL_HOSTNAME environment variable admits the route, the router places the value in the routerCanonicalHostname field in the route status. The user can examine the route status to determine which, if any, routers have admitted the route, select a router from the list, and find the host name of the router to pass along to the network administrator.
status:
  ingress:
  - conditions:
    - lastTransitionTime: 2016-12-07T15:20:57Z
      status: "True"
      type: Admitted
    host: hello.in.mycloud.com
    routerCanonicalHostname: rtr.example.com
    routerName: myrouter
    wildcardPolicy: None
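To pull just the canonical host names out of the route status, a JSONPath query can be used. This is a sketch that assumes the route shown in the oc describe example below:
$ oc get route/hello-route3 \
    -o jsonpath='{.status.ingress[*].routerCanonicalHostname}'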
oc describe includes the host name when available:
$ oc describe route/hello-route3
...
Requested Host: hello.in.mycloud.com exposed on router myrouter (host rtr.example.com) 12 minutes ago
Using the above information, the user can ask the DNS administrator to set up a CNAME from the route’s host, hello.in.mycloud.com, to the router’s canonical hostname, rtr.example.com. This results in any traffic to hello.in.mycloud.com reaching the user’s application.
You can customize the default routing subdomain by modifying the master configuration file. Routes that do not specify a host name would have one generated using this default routing subdomain.
You can customize the suffix used as the default routing subdomain for your environment using the master configuration file (the /etc/origin/master/master-config.yaml file by default).
The following example shows how you can set the configured suffix to v3.openshift.test:
routingConfig: subdomain: v3.openshift.test
This change requires a restart of the master if it is running.
With the OpenShift Enterprise master(s) running the above configuration, the generated host name for the example of a route named no-route-hostname without a host name added to a namespace mynamespace would be:
no-route-hostname-mynamespace.v3.openshift.test
If an administrator wants to restrict all routes to a specific routing subdomain, they can pass the --force-subdomain option to the oadm router command. This forces the router to override any host names specified in a route and generate one based on the template provided to the --force-subdomain option.
The following example runs a router, which overrides the route host names using a custom subdomain template ${name}-${namespace}.apps.example.com.
$ oadm router --force-subdomain='${name}-${namespace}.apps.example.com'
A TLS-enabled route that does not include a certificate uses the router’s default certificate instead. In most cases, this certificate should be provided by a trusted certificate authority, but for convenience you can use the OpenShift Enterprise CA to create the certificate. For example:
$ CA=/etc/origin/master
$ oadm ca create-server-cert --signer-cert=$CA/ca.crt \
    --signer-key=$CA/ca.key --signer-serial=$CA/ca.serial.txt \
    --hostnames='*.cloudapps.example.com' \
    --cert=cloudapps.crt --key=cloudapps.key
The router expects the certificate and key to be in PEM format in a single file:
$ cat cloudapps.crt cloudapps.key $CA/ca.crt > cloudapps.router.pem
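As a quick sanity check of the combined file, you can print the certificate's subject and expiry; the certificate is the first PEM block in the file, so openssl x509 reads it directly. A sketch:
$ openssl x509 -in cloudapps.router.pem -noout -subject -enddate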
From there you can use the --default-cert flag:
$ oadm router --default-cert=cloudapps.router.pem --service-account=router \
    --credentials=${ROUTER_KUBECONFIG:-"$KUBECONFIG"}
Browsers only consider wildcards valid for subdomains one level deep. So in this example, the certificate would be valid for a.cloudapps.example.com but not for a.b.cloudapps.example.com.
Currently, password protected key files are not supported. HAProxy prompts for a password upon starting and does not have a way to automate this process. To remove a passphrase from a keyfile, you can run:
# openssl rsa -in <passwordProtectedKey.key> -out <new.key>
Here is an example of how to use a secure edge terminated route with TLS termination occurring on the router before traffic is proxied to the destination. The secure edge terminated route specifies the TLS certificate and key information. The TLS certificate is served by the router front end.
First, start up a router instance:
# oadm router --replicas=1 --service-account=router \
    --credentials=${ROUTER_KUBECONFIG:-"$KUBECONFIG"}
Next, create a private key, CSR, and certificate for our edge secured route. The instructions on how to do that are specific to your certificate authority and provider. For a simple self-signed certificate for a domain named www.example.test, see the example shown below:
# sudo openssl genrsa -out example-test.key 2048
#
# sudo openssl req -new -key example-test.key -out example-test.csr \
    -subj "/C=US/ST=CA/L=Mountain View/O=OS3/OU=Eng/CN=www.example.test"
#
# sudo openssl x509 -req -days 366 -in example-test.csr \
    -signkey example-test.key -out example-test.crt
Generate a route using the above certificate and key.
$ oc create route edge --service=my-service \
    --hostname=www.example.test \
    --key=example-test.key --cert=example-test.crt
route "my-service" created
Look at its definition.
$ oc get route/my-service -o yaml
apiVersion: v1
kind: Route
metadata:
  name: my-service
spec:
  host: www.example.test
  to:
    kind: Service
    name: my-service
  tls:
    termination: edge
    key: |
      -----BEGIN PRIVATE KEY-----
      [...]
      -----END PRIVATE KEY-----
    certificate: |
      -----BEGIN CERTIFICATE-----
      [...]
      -----END CERTIFICATE-----
Make sure your DNS entry for www.example.test points to your router instance(s) and the route to your domain should be available. The example below uses curl along with a local resolver to simulate the DNS lookup:
# routerip="4.1.1.1"  #  replace with IP address of one of your router instances.
#
# curl -k --resolve www.example.test:443:$routerip https://www.example.test/
The OpenShift Enterprise router runs inside a container and the default behavior is to use the network stack of the host (i.e., the node where the router container runs). This default behavior benefits performance because network traffic from remote clients does not need to take multiple hops through user space to reach the target service and container.
Additionally, this default behavior enables the router to get the actual source IP address of the remote connection rather than getting the node’s IP address. This is useful for defining ingress rules based on the originating IP, supporting sticky sessions, and monitoring traffic, among other uses.
This host network behavior is controlled by the --host-network router command line option, and the default behaviour is the equivalent of using --host-network=true. If you wish to run the router with the container network stack, use the --host-network=false option when creating the router. For example:
$ oadm router \
    --credentials='/etc/origin/master/openshift-router.kubeconfig' \
    --service-account=router \
    --host-network=false
Internally, this means the router container must publish the 80 and 443 ports in order for the external network to communicate with the router.
Running with the container network stack means that the router sees the source IP address of a connection to be the NATed IP address of the node, rather than the actual remote IP address.
On OpenShift Enterprise clusters using multi-tenant network isolation, routers on a non-default namespace with the
Using the --metrics-image and --expose-metrics options, you can configure the OpenShift Enterprise router to run a sidecar container that exposes or publishes router metrics for consumption by external metrics collection and aggregation systems (e.g. Prometheus, statsd).
Depending on your router implementation, the image is appropriately set up and the metrics sidecar container is started when the router is deployed. For example, the HAProxy-based router implementation defaults to using the prom/haproxy-exporter image to run as a sidecar container, which can then be used as a metrics datasource by the Prometheus server.
Grab the HAProxy Prometheus exporter image from the Docker registry:
$ sudo docker pull prom/haproxy-exporter
Create the OpenShift Enterprise router:
$ oadm router \
    --credentials='/etc/origin/master/openshift-router.kubeconfig' \
    --service-account=router --expose-metrics
Or, optionally, use the --metrics-image option to override the HAProxy defaults:
$ oadm router \
    --credentials='/etc/origin/master/openshift-router.kubeconfig' \
    --service-account=router --expose-metrics \
    --metrics-image=prom/haproxy-exporter
Once the haproxy-exporter containers (and your HAProxy router) have started, point Prometheus to the sidecar container on port 9101 on the node where the haproxy-exporter container is running:
$ haproxy_exporter_ip="<enter-ip-address-or-hostname>"
$ cat > haproxy-scraper.yml <<CFGEOF
---
global:
  scrape_interval: "60s"
  scrape_timeout:  "10s"
  # external_labels:
  #   source: openshift-router
scrape_configs:
  - job_name: "haproxy"
    target_groups:
      - targets:
        - "${haproxy_exporter_ip}:9101"
CFGEOF

$ # And start prometheus as you would normally using the above config file.
$ echo "  - Example: prometheus -config.file=haproxy-scraper.yml "
$ echo "    or you can start it as a container on OpenShift Enterprise!! "
$ echo "  - Once the prometheus server is up, view the OpenShift Enterprise HAProxy "
$ echo "    router metrics at: http://<ip>:9090/consoles/haproxy.html "
If you connect to the router while the proxy is reloading, there is a small
chance that your connection will end up in the wrong network queue and be
dropped. The issue is being addressed. In the meantime, it is possible to work
around the problem by installing iptables
rules to prevent connections during
the reload window. However, doing so means that the router needs to run with
elevated privilege so that it can manipulate iptables
on the host. It also
means that connections that happen during the reload are temporarily ignored and
must retransmit their connection start, lengthening the time it takes to
connect, but preventing connection failure.
To prevent this, configure the router to use iptables
by changing the service
account, and setting an environment variable on the router.
Use a Privileged SCC
When creating the router, allow it to use the privileged SCC. This gives the router user the ability to create containers with root privileges on the nodes:
$ oadm policy add-scc-to-user privileged -z router
Patch the Router Deployment Configuration to Create a Privileged Container
You can now create privileged containers. Next, configure the router deployment configuration to use the privilege so that the router can set the iptables rules it needs. This patch changes the router deployment configuration so that the container that is created runs as root:
$ oc patch dc router -p '{"spec":{"template":{"spec":{"containers":[{"name":"router","securityContext":{"privileged":true}}]}}}}'
Configure the Router to Use iptables
Set the option on the router deployment configuration:
$ oc set env dc/router -c router DROP_SYN_DURING_RESTART=1
If you used a non-default name for the router, you must change dc/router accordingly.
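To confirm the variable is set on the router container, you can list its environment. A sketch, assuming the default router name:
$ oc set env dc/router --list -c router | grep DROP_SYN_DURING_RESTART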
The HAProxy router is based on a golang template that generates the HAProxy configuration file from a list of routes. If you want a customized template router to meet your needs, you can customize the template file, build a new Docker image, and run a customized router. Alternatively you can use a ConfigMap.
One common case for this might be implementing new features within the application back ends. For example, it might be desirable in a highly-available setup to use stick-tables that synchronizes between peers. The router plug-in provides all the facilities necessary to make this customization.
You can obtain a new haproxy-config.template file from the latest router image by running:
# docker run --rm --interactive=true --tty --entrypoint=cat \
    registry.access.redhat.com/openshift3/ose-haproxy-router:v3.0.2.0 haproxy-config.template
Save this content to a file for use as the basis of your customized template.
You can use a ConfigMap to customize the router instance without rebuilding the router image. You can modify the haproxy-config.template, reload-haproxy, and other scripts, as well as create and modify router environment variables.
Copy the haproxy-config.template that you want to modify as described above. Modify it as desired.
Create a ConfigMap:
$ oc create configmap customrouter --from-file=haproxy-config.template
The customrouter ConfigMap now contains a copy of the modified haproxy-config.template file.
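You can inspect the ConfigMap to confirm the template was stored as expected. A sketch:
$ oc describe configmap customrouter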
Modify the router deployment configuration to mount the ConfigMap as a file and point the TEMPLATE_FILE environment variable to it. This can be done via the oc env and oc volume commands, or alternatively by editing the router deployment configuration.
Using oc commands:
$ oc env dc/router \
TEMPLATE_FILE=/var/lib/haproxy/conf/custom/haproxy-config.template
$ oc volume dc/router --add --overwrite \
--name=config-volume \
--mount-path=/var/lib/haproxy/conf/custom \
--source='{"configMap": { "name": "customrouter"}}'
Use oc edit dc router to edit the router deployment configuration with a text editor.
...
- name: STATS_USERNAME
value: admin
- name: TEMPLATE_FILE (1)
value: /var/lib/haproxy/conf/custom/haproxy-config.template
image: openshift/origin-haproxy-router
...
terminationMessagePath: /dev/termination-log
volumeMounts: (2)
- mountPath: /var/lib/haproxy/conf/custom
name: config-volume
dnsPolicy: ClusterFirst
...
terminationGracePeriodSeconds: 30
volumes: (3)
- configMap:
name: customrouter
name: config-volume
test: false
...
1 | In the spec.container.env field, add the TEMPLATE_FILE environment variable to point to the mounted haproxy-config.template file. |
2 | Add the spec.container.volumeMounts field to create the mount point. |
3 | Add a new spec.volumes field to mention the ConfigMap. |
Save the changes and exit the editor. This restarts the router.
The following example customization can be used in a highly-available routing setup to use stick-tables that synchronize between peers.
Adding a Peer Section
In order to synchronize stick-tables amongst peers you must define a peers section in your HAProxy configuration. This section determines how HAProxy will identify and connect to peers. The plug-in provides data to the template under the .PeerEndpoints variable to allow you to easily identify members of the router service. You may add a peer section to the haproxy-config.template file inside the router image by adding:
{{ if (len .PeerEndpoints) gt 0 }}
peers openshift_peers
  {{ range $endpointID, $endpoint := .PeerEndpoints }}
  peer {{$endpoint.TargetName}} {{$endpoint.IP}}:1937
  {{ end }}
{{ end }}
Changing the Reload Script
When using stick-tables, you have the option of telling HAProxy what it should consider the name of the local host in the peer section. When creating endpoints, the plug-in attempts to set the TargetName to the value of the endpoint’s TargetRef.Name. If TargetRef is not set, it will set the TargetName to the IP address. The TargetRef.Name corresponds with the Kubernetes host name, therefore you can add the -L option to the reload-haproxy script to identify the local host in the peer section.
peer_name=$HOSTNAME (1)

if [ -n "$old_pid" ]; then
  /usr/sbin/haproxy -f $config_file -p $pid_file -L $peer_name -sf $old_pid
else
  /usr/sbin/haproxy -f $config_file -p $pid_file -L $peer_name
fi
1 | Must match an endpoint target name that is used in the peer section. |
Modifying Back Ends
Finally, to use the stick-tables within back ends, you can modify the HAProxy configuration to use the stick-tables and peer set. The following is an example of changing the existing back end for TCP connections to use stick-tables:
{{ if eq $cfg.TLSTermination "passthrough" }}
backend be_tcp_{{$cfgIdx}}
  balance leastconn
  timeout check 5000ms
  stick-table type ip size 1m expire 5m{{ if (len $.PeerEndpoints) gt 0 }} peers openshift_peers {{ end }}
  stick on src
  {{ range $endpointID, $endpoint := $serviceUnit.EndpointTable }}
  server {{$endpointID}} {{$endpoint.IP}}:{{$endpoint.Port}} check inter 5000ms
  {{ end }}
{{ end }}
After this modification, you can rebuild your router.
After you have made any desired modifications to the template, such as the example stick tables customization, you must rebuild your router for your changes to go into effect:
Rebuild the Docker image to include your customized template.
Create the router specifying your new image, either:
in the pod’s object definition directly, or
by adding the --images=<repo>/<image>:<tag> flag to the oadm router command when creating a highly-available routing service (see the sketch after this list).
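A minimal sketch of the rebuild-and-redeploy flow; the repository, tag, and router names are placeholders:
# myrepo/my-haproxy-router:custom is a hypothetical image name.
# docker build -t myrepo/my-haproxy-router:custom .
# docker push myrepo/my-haproxy-router:custom
$ oadm router myrouter --images=myrepo/my-haproxy-router:custom \
    --service-account=router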
The F5 router plug-in is available starting in OpenShift Enterprise 3.0.2.
The F5 router plug-in is provided as a Docker image and runs as a pod, just like the default HAProxy router. Deploying the F5 router is done similarly as well, using the oadm router command but providing additional flags (or environment variables) to specify the following parameters for the F5 BIG-IP® host:
Flag | Description |
---|---|
--type=f5-router | Specifies that an F5 router should be launched (the default --type is haproxy-router). |
--external-host | Specifies the F5 BIG-IP® host’s management interface’s host name or IP address. |
--external-host-username | Specifies the F5 BIG-IP® user name (typically admin). |
--external-host-password | Specifies the F5 BIG-IP® password. |
--external-host-http-vserver | Specifies the name of the F5 virtual server for HTTP connections. |
--external-host-https-vserver | Specifies the name of the F5 virtual server for HTTPS connections. |
--external-host-private-key | Specifies the path to the SSH private key file for the F5 BIG-IP® host. Required to upload and delete key and certificate files for routes. |
--external-host-insecure | A Boolean flag that indicates that the F5 router should skip strict certificate verification with the F5 BIG-IP® host. |
As with the HAProxy router, the oadm router
command creates the service and
deployment configuration objects, and thus the replication controllers and
pod(s) in which the F5 router itself runs. The replication controller restarts
the F5 router in case of crashes. Because the F5 router is only watching routes
and endpoints and configuring F5 BIG-IP® accordingly, running the F5 router in
this way along with an appropriately configured F5 BIG-IP® deployment should
satisfy high-availability requirements.
The F5 router will also need to be run in privileged mode because route certificates get copied using scp:
$ oadm policy remove-scc-from-user hostnetwork -z router
$ oadm policy add-scc-to-user privileged -z router
To deploy the F5 router:
First, establish a tunnel using a ramp node, which allows for the routing of traffic to pods through the OpenShift Enterprise SDN.
Run the oadm router command with the appropriate flags. For example:
$ oadm router \
    --type=f5-router \
    --external-host=10.0.0.2 \
    --external-host-username=admin \
    --external-host-password=mypassword \
    --external-host-http-vserver=ose-vserver \
    --external-host-https-vserver=https-ose-vserver \
    --external-host-private-key=/path/to/key \
    --credentials='/etc/origin/master/openshift-router.kubeconfig' \ (1)
    --service-account=router
1 | --credentials is the path to the CLI configuration file for the openshift-router. It is recommended to use an openshift-router specific profile with appropriate permissions. |
If you deployed an HAProxy router, you can learn more about monitoring the router.
If you have not yet done so, you can:
Configure authentication; by default, authentication is set to Deny All.
Deploy an integrated Docker registry.