The OpenShift router is the ingress point for all external traffic destined for services in your OpenShift installation. OpenShift provides and supports the following two router plug-ins:
The HAProxy template router is the default plug-in. It uses the openshift3/ose-haproxy-router image to run an HAProxy instance alongside the template router plug-in inside a container on OpenShift. It currently supports HTTP(S) traffic and TLS-enabled traffic via SNI. The router’s container listens on the host network interface, unlike most containers that listen only on private IPs. The router proxies external requests for route names to the IPs of actual pods identified by the service associated with the route.
The F5 router integrates with an existing F5 BIG-IP® system in your environment to synchronize routes. F5 BIG-IP® version 11.4 or newer is required in order to have the F5 iControl REST API.
The F5 router plug-in is available starting in OpenShift Enterprise 3.0.2. |
Before deploying an OpenShift cluster, you must have a service account for the router. Starting in OpenShift Enterprise 3.1, a router service account is automatically created during a quick or advanced installation (previously, this required manual creation). This service account has permissions to a security context constraint (SCC) that allows it to specify host ports.
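For example, you can check for the account and, on older installations where it was not created automatically, create it and grant it the required SCC access. This is a sketch that assumes the router runs in the default project and that your version provides the oadm policy add-scc-to-user subcommand; the privileged SCC is used here for illustration:

$ oc get serviceaccount router -n default
$ # If it does not exist, create it and allow it to use host ports:
$ echo '{"kind": "ServiceAccount", "apiVersion": "v1", "metadata": {"name": "router"}}' \
    | oc create -n default -f -
$ oadm policy add-scc-to-user privileged system:serviceaccount:default:router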
The oadm router command is provided with the administrator CLI to simplify the tasks of setting up routers in a new installation. Just about every form of communication between OpenShift components is secured by TLS and uses various certificates and authentication methods. Use the --credentials option to specify what credentials the router should use to contact the master.
Routers directly attach to ports 80 and 443 on all interfaces on a host. Restrict routers to hosts where ports 80 and 443 are available and not being consumed by another service, and set this using node selectors and the scheduler configuration. For example, you can achieve this by dedicating infrastructure nodes to run services such as routers.
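For example, to dedicate a node as an infrastructure node, you could label it and later deploy the router with a matching node selector (the node name below is illustrative):

$ oc label node node1.example.com region=infra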
It is recommended to use separate, distinct openshift-router credentials with your router. The credentials can be provided using the --credentials flag, for example:

$ oadm router --dry-run --service-account=router \
    --credentials='/etc/origin/master/openshift-router.kubeconfig' (1)

1 | --credentials is the path to the CLI configuration file for the openshift-router.
The default router service account, named router, is automatically created during quick and advanced installations. To verify that this account already exists:
$ oadm router --dry-run \
    --credentials='/etc/origin/master/openshift-router.kubeconfig' \
    --service-account=router
To see what the default router would look like if created:
$ oadm router -o yaml \
    --credentials='/etc/origin/master/openshift-router.kubeconfig' \
    --service-account=router
To create a router if it does not exist:
$ oadm router <router_name> --replicas=<number> \
    --credentials='/etc/origin/master/openshift-router.kubeconfig' \
    --service-account=router
Multiple instances are created on different hosts according to the scheduler policy.
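To confirm where the scheduler placed the router replicas, you can list the router pods together with the nodes they run on. This is a quick check assuming the default router naming:

$ oc get pods -o wide | grep router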
To use a different router image and view the router configuration that would be used:
$ oadm router <router_name> -o <format> --images=<image> \
    --credentials='/etc/origin/master/openshift-router.kubeconfig' \
    --service-account=router
For example:
$ oadm router region-west -o yaml --images=myrepo/somerouter:mytag \
    --credentials='/etc/origin/master/openshift-router.kubeconfig' \
    --service-account=router
To deploy the router to any node(s) that match a specified node label:
$ oadm router <router_name> --replicas=<number> --selector=<label> \
    --service-account=router
For example, if you want to create a router named router and have it placed on a node labeled with region=infra:
$ oadm router router --replicas=1 --selector='region=infra' \
    --service-account=router
You can set up a highly-available router on your OpenShift Enterprise cluster using IP failover.
You can customize the suffix used as the default routing subdomain for your environment using the master configuration file (the /etc/origin/master/master-config.yaml file by default). The following example shows how you can set the configured suffix to v3.openshift.test:
routingConfig:
  subdomain: v3.openshift.test
This change requires a restart of the master if it is running.
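For example, on a standard OpenShift Enterprise installation the master can be restarted with systemd. The service name below assumes the default enterprise installation; it may differ depending on your deployment (for example, origin-master on OpenShift Origin):

# systemctl restart atomic-openshift-master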
With the OpenShift master(s) running the above configuration, the generated host name for a route named myroute added to a namespace mynamespace would be:
myroute-mynamespace.v3.openshift.test
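For example, a route that omits an explicit spec.host value picks up the generated host name. The following sketch uses illustrative resource names (myroute, mynamespace, myservice):

apiVersion: v1
kind: Route
metadata:
  name: myroute
  namespace: mynamespace
spec:
  to:
    kind: Service
    name: myservice
  # No spec.host is set, so the router generates:
  #   myroute-mynamespace.v3.openshift.test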
A TLS-enabled route that does not include a certificate uses the router’s default certificate instead. In most cases, this certificate should be provided by a trusted certificate authority, but for convenience you can use the OpenShift CA to create the certificate. For example:
$ CA=/etc/origin/master
$ oadm ca create-server-cert --signer-cert=$CA/ca.crt \
    --signer-key=$CA/ca.key --signer-serial=$CA/ca.serial.txt \
    --hostnames='*.cloudapps.example.com' \
    --cert=cloudapps.crt --key=cloudapps.key
The router expects the certificate and key to be in PEM format in a single file:
$ cat cloudapps.crt cloudapps.key $CA/ca.crt > cloudapps.router.pem
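As a sanity check, you can confirm that the combined file begins with the server certificate; openssl x509 reads the first certificate in the file:

$ openssl x509 -noout -subject -dates -in cloudapps.router.pem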
From there you can use the --default-cert flag:
$ oadm router --default-cert=cloudapps.router.pem --service-account=router \
    --credentials=${ROUTER_KUBECONFIG:-"$KUBECONFIG"}
Browsers only consider wildcards valid for subdomains one level deep. So in this example, the certificate would be valid for a.cloudapps.example.com but not for a.b.cloudapps.example.com.
Currently, password-protected key files are not supported. HAProxy prompts for a password upon starting and does not have a way to automate this process. To remove a passphrase from a key file, you can run:
# openssl rsa -in <passwordProtectedKey.key> -out <new.key>
Here is an example of how to use a secure edge-terminated route, with TLS termination occurring on the router before traffic is proxied to the destination. The route specifies the TLS certificate and key information, and the certificate is served by the router front end.
First, start up a router instance:
# oadm router --replicas=1 --service-account=router \
    --credentials=${ROUTER_KUBECONFIG:-"$KUBECONFIG"}
Next, create a private key, CSR, and certificate for the edge-secured route. The instructions on how to do that are specific to your certificate authority or provider. For a simple self-signed certificate for a domain named www.example.test, see the example below:
# sudo openssl genrsa -out example-test.key 2048
#
# sudo openssl req -new -key example-test.key -out example-test.csr \
    -subj "/C=US/ST=CA/L=Mountain View/O=OS3/OU=Eng/CN=www.example.test"
#
# sudo openssl x509 -req -days 366 -in example-test.csr \
    -signkey example-test.key -out example-test.crt
Generate a route configuration file using the above certificate and key. Make sure to replace my-service in the servicename variable with the name of your service.
# servicename="my-service"
# echo "
apiVersion: v1
kind: Route
metadata:
  name: secured-edge-route
spec:
  host: www.example.test
  to:
    kind: Service
    name: $servicename
  tls:
    termination: edge
    key: |
$(openssl rsa -in example-test.key | sed 's/^/      /')
    certificate: |
$(openssl x509 -in example-test.crt | sed 's/^/      /')
" > example-test-route.yaml
Finally, add the route to OpenShift (and the router) via:
# oc create -f example-test-route.yaml
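You can verify that the route was created, for example:

# oc get route secured-edge-route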
Make sure your DNS entry for www.example.test points to your router instance(s), and the route to your domain should be available. The example below uses curl along with a local resolver to simulate the DNS lookup:
# routerip="4.1.1.1"  # replace with IP address of one of your router instances.
# curl -k --resolve www.example.test:443:$routerip https://www.example.test/
The OpenShift router runs inside a Docker container and the default behavior is to use the network stack of the host (i.e., the node where the router container runs). This default behavior benefits performance because network traffic from remote clients does not need to take multiple hops through user space to reach the target service and container.
Additionally, this default behavior enables the router to get the actual source IP address of the remote connection rather than getting the node’s IP address. This is useful for defining ingress rules based on the originating IP, supporting sticky sessions, and monitoring traffic, among other uses.
This host network behavior is controlled by the --host-network router command line option, and the default behavior is the equivalent of using --host-network=true. If you wish to run the router with the container network stack, use the --host-network=false option when creating the router. For example:
$ oadm router \
    --credentials='/etc/origin/master/openshift-router.kubeconfig' \
    --service-account=router \
    --host-network=false
Internally, this means the router container must publish ports 80 and 443 in order for the external network to communicate with the router.
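For illustration, the relevant container port specification in the router's deployment configuration might look like the following sketch; the exact generated fields can differ by version:

spec:
  containers:
  - name: router
    ports:
    # Published so external traffic can reach the router container:
    - containerPort: 80
      hostPort: 80
    - containerPort: 443
      hostPort: 443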
Running with the container network stack means that the router sees the source IP address of a connection as the NATed IP address of the node, rather than the actual remote IP address.
Using the --metrics-image and --expose-metrics options, you can configure the OpenShift Enterprise router to run a sidecar container that exposes or publishes router metrics for consumption by external metrics collection and aggregation systems (e.g., Prometheus, statsd).
Depending on your router implementation, the image is appropriately set up and the metrics sidecar container is started when the router is deployed. For example, the HAProxy-based router implementation defaults to using the prom/haproxy-exporter image to run as a sidecar container, which can then be used as a metrics datasource by the Prometheus server.
Grab the HAProxy Prometheus exporter image from the Docker registry:
$ sudo docker pull prom/haproxy-exporter
Create the OpenShift Enterprise router:
$ oadm router \
    --credentials='/etc/origin/master/openshift-router.kubeconfig' \
    --service-account=router --expose-metrics
Or, optionally, use the --metrics-image option to override the HAProxy defaults:
$ oadm router \
    --credentials='/etc/origin/master/openshift-router.kubeconfig' \
    --service-account=router --expose-metrics \
    --metrics-image=prom/haproxy-exporter
Once the haproxy-exporter containers (and your HAProxy router) have started, point Prometheus to the sidecar container on port 9101 on the node where the haproxy-exporter container is running:
$ haproxy_exporter_ip="<enter-ip-address-or-hostname>"
$ cat > haproxy-scraper.yml <<CFGEOF
---
global:
  scrape_interval: "60s"
  scrape_timeout: "10s"
  # external_labels:
  #   source: openshift-router
scrape_configs:
  - job_name: "haproxy"
    target_groups:
      - targets:
        - "${haproxy_exporter_ip}:9101"
CFGEOF
$ # And start prometheus as you would normally using the above config file.
$ echo " - Example: prometheus -config.file=haproxy-scraper.yml "
$ echo "   or you can start it as a container on OpenShift!!"
$ echo " - Once the prometheus server is up, view the OpenShift HAProxy "
$ echo "   router metrics at: http://<ip>:9090/consoles/haproxy.html "
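Before wiring up Prometheus, you can also verify that the exporter is serving metrics directly. This assumes the sidecar is reachable at the address used above:

$ curl http://${haproxy_exporter_ip}:9101/metrics | head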
The HAProxy router is based on a golang template that generates the HAProxy configuration file from a list of routes. If you want a customized template router to meet your needs, you can customize the template file, build a new Docker image, and run a customized router.
One common case for this might be implementing new features within the application back ends. For example, it might be desirable in a highly-available setup to use stick-tables that synchronize between peers. The router plug-in provides all the facilities necessary to make this customization.
You can obtain a new haproxy-config.template file from the latest router image by running:
# docker run --rm --interactive=true --tty --entrypoint=cat \
    registry.access.redhat.com/openshift3/ose-haproxy-router:v3.0.2.0 \
    haproxy-config.template
Save this content to a file for use as the basis of your customized template.
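For example, to redirect the template straight into a local file, you can run the same command without --tty (which would otherwise add carriage returns to the redirected output):

# docker run --rm --entrypoint=cat \
    registry.access.redhat.com/openshift3/ose-haproxy-router:v3.0.2.0 \
    haproxy-config.template > haproxy-config.template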
The following example customization can be used in a highly-available routing setup to use stick-tables that synchronize between peers.
Adding a Peer Section
In order to synchronize stick-tables amongst peers, you must define a peers section in your HAProxy configuration. This section determines how HAProxy will identify and connect to peers. The plug-in provides data to the template under the .PeerEndpoints variable to allow you to easily identify members of the router service. You may add a peer section to the haproxy-config.template file inside the router image by adding:
{{ if (len .PeerEndpoints) gt 0 }}
peers openshift_peers
  {{ range $endpointID, $endpoint := .PeerEndpoints }}
  peer {{$endpoint.TargetName}} {{$endpoint.IP}}:1937
  {{ end }}
{{ end }}
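For a router service with two endpoints, the rendered section might look like the following; the peer names and IP addresses here are illustrative:

peers openshift_peers
  peer router-1-abcde 10.1.2.3:1937
  peer router-2-fghij 10.1.4.5:1937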
Changing the Reload Script
When using stick-tables, you have the option of telling HAProxy what it should consider the name of the local host in the peer section. When creating endpoints, the plug-in attempts to set the TargetName to the value of the endpoint's TargetRef.Name. If TargetRef is not set, it will set the TargetName to the IP address. The TargetRef.Name corresponds with the Kubernetes host name, therefore you can add the -L option to the reload-haproxy script to identify the local host in the peer section.
peer_name=$HOSTNAME    (1)

if [ -n "$old_pid" ]; then
  /usr/sbin/haproxy -f $config_file -p $pid_file -L $peer_name -sf $old_pid
else
  /usr/sbin/haproxy -f $config_file -p $pid_file -L $peer_name
fi
1 | Must match an endpoint target name that is used in the peer section.
Modifying Back Ends
Finally, to use the stick-tables within back ends, you can modify the HAProxy configuration to use the stick-tables and peer set. The following is an example of changing the existing back end for TCP connections to use stick-tables:
{{ if eq $cfg.TLSTermination "passthrough" }}
backend be_tcp_{{$cfgIdx}}
  balance leastconn
  timeout check 5000ms
  stick-table type ip size 1m expire 5m{{ if (len $.PeerEndpoints) gt 0 }} peers openshift_peers {{ end }}
  stick on src
  {{ range $endpointID, $endpoint := $serviceUnit.EndpointTable }}
  server {{$endpointID}} {{$endpoint.IP}}:{{$endpoint.Port}} check inter 5000ms
  {{ end }}
{{ end }}
After this modification, you can rebuild your router.
After you have made any desired modifications to the template, such as the example stick-tables customization, you must rebuild your router for your changes to take effect:
Rebuild the Docker image to include your customized template (see the sketch after this list).
Create the router specifying your new image, either:
in the pod's object definition directly, or
by adding the --images=<repo>/<image>:<tag> flag to the oadm router command when creating a highly-available routing service.
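A minimal sketch of the image rebuild, assuming the customized template sits in the current directory and that the image keeps the template in its working directory (as implied by the cat command earlier); the repository name myrepo/my-router is illustrative:

# cat Dockerfile
FROM registry.access.redhat.com/openshift3/ose-haproxy-router:v3.0.2.0
COPY haproxy-config.template haproxy-config.template
# docker build -t myrepo/my-router:custom .
# docker push myrepo/my-router:custom

You can then create the router with --images=myrepo/my-router:custom.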
The F5 router plug-in is available starting in OpenShift Enterprise 3.0.2.
The F5 router plug-in is provided as a Docker image and runs as a pod, just like the default HAProxy router. Deploying the F5 router is done similarly as well, using the oadm router command but providing additional flags (or environment variables) to specify the following parameters for the F5 BIG-IP® host:
Flag | Description
---|---
--type=f5-router | Specifies that an F5 router should be launched (the default --type is haproxy-router).
--external-host | Specifies the F5 BIG-IP® host's management interface's host name or IP address.
--external-host-username | Specifies the F5 BIG-IP® user name (typically admin).
--external-host-password | Specifies the F5 BIG-IP® password.
--external-host-http-vserver | Specifies the name of the F5 virtual server for HTTP connections.
--external-host-https-vserver | Specifies the name of the F5 virtual server for HTTPS connections.
--external-host-private-key | Specifies the path to the SSH private key file for the F5 BIG-IP® host. Required to upload and delete key and certificate files for routes.
--external-host-insecure | A Boolean flag that indicates that the F5 router should skip strict certificate verification with the F5 BIG-IP® host.
As with the HAProxy router, the oadm router command creates the service and deployment configuration objects, and thus the replication controllers and pod(s) in which the F5 router itself runs. The replication controller restarts the F5 router in case of crashes. Because the F5 router is only watching routes and endpoints and configuring F5 BIG-IP® accordingly, running the F5 router in this way along with an appropriately configured F5 BIG-IP® deployment should satisfy high-availability requirements.
To deploy the F5 router:
First, establish a tunnel using a ramp node, which allows for the routing of traffic to pods through the OpenShift SDN.
Run the oadm router command with the appropriate flags. For example:
$ oadm router \
    --type=f5-router \
    --external-host=10.0.0.2 \
    --external-host-username=admin \
    --external-host-password=mypassword \
    --external-host-http-vserver=ose-vserver \
    --external-host-https-vserver=https-ose-vserver \
    --external-host-private-key=/path/to/key \
    --credentials='/etc/origin/master/openshift-router.kubeconfig' \ (1)
    --service-account=router
1 | --credentials is the path to the CLI configuration file for the openshift-router. It is recommended to use an openshift-router-specific profile with appropriate permissions.
If you deployed an HAProxy router, you can learn more about monitoring the router.
If you have not yet done so, you can:
Configure authentication; by default, authentication is set to Deny All.
Deploy an integrated Docker registry.