OpenShift Container Platform provides methods for communicating from outside the cluster with services running in the cluster. The method described in this section uses an Ingress Controller.
The Ingress Operator manages Ingress Controllers and wildcard DNS.
Using an Ingress Controller is the most common way to allow external access to an OpenShift Container Platform cluster.
An Ingress Controller is configured to accept external requests and proxy them based on the configured routes. This is limited to HTTP, HTTPS using SNI, and TLS using SNI, which is sufficient for web applications and services that work over TLS with SNI.
Work with your administrator to configure an Ingress Controller to accept external requests and proxy them based on the configured routes.
The administrator can create a wildcard DNS entry and then set up an Ingress Controller. Then, you can work with the edge Ingress Controller without having to contact the administrators.
By default, every Ingress Controller in the cluster can admit any route created in any project in the cluster.
The Ingress Controller:
Has two replicas by default, which means it should be running on two worker nodes.
Can be scaled up to have more replicas on more nodes.
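For example, a cluster administrator can scale the Ingress Controller by patching its replicas field. A minimal sketch, assuming the default Ingress Controller and a target of three replicas:
$ oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{"spec":{"replicas":3}}'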
The procedures in this section require prerequisites performed by the cluster administrator.
Before starting the following procedures, the administrator must:
Set up the external port to the cluster networking environment so that requests can reach the cluster.
Make sure there is at least one user with the cluster-admin role. To add this role to a user, run the following command:
$ oc adm policy add-cluster-role-to-user cluster-admin username
Have an OpenShift Container Platform cluster with at least one master and at least one node and a system outside the cluster that has network access to the cluster. This procedure assumes that the external system is on the same subnet as the cluster. The additional networking required for external systems on a different subnet is out of scope for this topic.
If the project and service that you want to expose do not exist, first create the project, then the service.
If the project and service already exist, skip to the procedure on exposing the service to create a route.
Install the oc CLI and log in as a cluster administrator.
Create a new project for your service by running the oc new-project command:
$ oc new-project myproject
Use the oc new-app command to create your service:
$ oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git
To verify that the service was created, run the following command:
$ oc get svc -n myproject
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s
By default, the new service does not have an external IP address.
You can expose the service as a route by using the oc expose command.
To expose the service:
Log in to OpenShift Container Platform.
Log in to the project where the service you want to expose is located:
$ oc project myproject
Run the oc expose service command to expose the route:
$ oc expose service nodejs-ex
route.route.openshift.io/nodejs-ex exposed
To verify that the service is exposed, you can use a tool, such as cURL, to make sure the service is accessible from outside the cluster.
Use the oc get route command to find the route’s host name:
$ oc get route
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
nodejs-ex nodejs-ex-myproject.example.com nodejs-ex 8080-tcp None
Use cURL to check that the host responds to an HTTP HEAD request:
$ curl --head nodejs-ex-myproject.example.com
HTTP/1.1 200 OK
...
In OpenShift Container Platform, an Ingress Controller can serve all routes, or it can serve a subset of routes. By default, the Ingress Controller serves any route created in any namespace in the cluster. You can add additional Ingress Controllers to your cluster to optimize routing by creating shards, which are subsets of routes based on selected characteristics. To mark a route as a member of a shard, use labels in the route or namespace metadata field. The Ingress Controller uses selectors, also known as a selection expression, to select a subset of routes from the entire pool of routes to serve.
Ingress sharding is useful in cases where you want to load balance incoming traffic across multiple Ingress Controllers, when you want to isolate traffic to be routed to a specific Ingress Controller, or for a variety of other reasons described in the next section.
By default, each route uses the default domain of the cluster. However, routes can be configured to use the domain of the router instead.
You can use Ingress sharding, also known as router sharding, to distribute a set of routes across multiple routers by adding labels to routes, namespaces, or both. The Ingress Controller uses a corresponding set of selectors to admit only the routes that have a specified label. Each Ingress shard comprises the routes that are filtered using a given selection expression.
As the primary mechanism for traffic to enter the cluster, the demands on the Ingress Controller can be significant. As a cluster administrator, you can shard the routes to:
Balance Ingress Controllers, or routers, with several routes to speed up responses to changes.
Allocate certain routes to have different reliability guarantees than other routes.
Allow certain Ingress Controllers to have different policies defined.
Allow only specific routes to use additional features.
Expose different routes on different addresses so that internal and external users can see different routes, for example.
Transfer traffic from one version of an application to another during a blue-green deployment.
When Ingress Controllers are sharded, a given route is admitted to zero or more Ingress Controllers in the group. A route’s status describes whether an Ingress Controller has admitted it or not. An Ingress Controller admits a route only if the route is unique to its shard.
An Ingress Controller can use three sharding methods:
Adding only a namespace selector to the Ingress Controller, so that all routes in a namespace with labels that match the namespace selector are in the Ingress shard.
Adding only a route selector to the Ingress Controller, so that all routes with labels that match the route selector are in the Ingress shard.
Adding both a namespace selector and route selector to the Ingress Controller, so that routes with labels that match the route selector in a namespace with labels that match the namespace selector are in the Ingress shard.
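As a sketch of the third method, an Ingress Controller can combine both selector types. The names and label values below are assumptions for illustration, not required values:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: combined-router
  namespace: openshift-ingress-operator
spec:
  namespaceSelector:
    matchLabels:
      name: finance
  routeSelector:
    matchLabels:
      type: sharded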
With sharding, you can distribute subsets of routes over multiple Ingress Controllers. These subsets can be non-overlapping, also called traditional sharding, or overlapping, otherwise known as overlapped sharding.
An example of a configured Ingress Controller finops-router that has the label selector spec.namespaceSelector.matchExpressions with key values set to finance and ops:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: finops-router
  namespace: openshift-ingress-operator
spec:
  namespaceSelector:
    matchExpressions:
    - key: name
      operator: In
      values:
      - finance
      - ops
An example of a configured Ingress Controller dev-router that has the label selector spec.namespaceSelector.matchLabels.name with the key value set to dev:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: dev-router
  namespace: openshift-ingress-operator
spec:
  namespaceSelector:
    matchLabels:
      name: dev
If all application routes are in separate namespaces, such as each labeled with name:finance, name:ops, and name:dev, the configuration effectively distributes your routes between the two Ingress Controllers. OpenShift Container Platform routes for console, authentication, and other purposes should not be handled by these Ingress Controllers.
In the previous scenario, sharding becomes a special case of partitioning, with no overlapping subsets. Routes are divided between router shards.
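For example, you can apply the namespace labels used in this scenario with the oc label command:
$ oc label namespace finance name=finance
$ oc label namespace ops name=ops
$ oc label namespace dev name=dev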
An example of a configured Ingress Controller devops-router that has the label selector spec.namespaceSelector.matchExpressions with key values set to dev and ops:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: devops-router
  namespace: openshift-ingress-operator
spec:
  namespaceSelector:
    matchExpressions:
    - key: name
      operator: In
      values:
      - dev
      - ops
The routes in the namespaces labeled name:dev and name:ops are now serviced by two different Ingress Controllers. With this configuration, you have overlapping subsets of routes.
With overlapping subsets of routes you can create more complex routing rules. For example, you can divert higher priority traffic to the dedicated finops-router while sending lower priority traffic to devops-router.
After creating a new Ingress shard, there might be routes that are admitted to your new Ingress shard that are also admitted by the default Ingress Controller. This is because the default Ingress Controller has no selectors and admits all routes by default.
You can restrict an Ingress Controller from servicing routes with specific labels using either namespace selectors or route selectors. The following procedure restricts the default Ingress Controller from servicing your newly sharded finance, ops, and dev routes using a namespace selector. This adds further isolation to Ingress shards.
You must keep all of OpenShift Container Platform’s administration routes on the same Ingress Controller. Therefore, avoid adding additional selectors to the default Ingress Controller that exclude these essential routes.
You installed the OpenShift CLI (oc).
You are logged in as a project administrator.
Modify the default Ingress Controller by running the following command:
$ oc edit ingresscontroller -n openshift-ingress-operator default
Edit the Ingress Controller to contain a namespaceSelector that excludes the routes with any of the finance, ops, and dev labels:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  namespaceSelector:
    matchExpressions:
    - key: name
      operator: NotIn
      values:
      - finance
      - ops
      - dev
The default Ingress Controller will no longer serve the namespaces labeled name:finance, name:ops, and name:dev.
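To confirm which Ingress Controller admitted a given route, you can inspect the route's status. A minimal sketch, assuming a route in the finance namespace; the route name is a placeholder:
$ oc get route <route-name> -n finance -o jsonpath='{.status.ingress[*].routerName}'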
The cluster administrator is responsible for making a separate DNS entry for each router in a project. A router will not forward unknown routes to another router.
Consider the following example:
Router A lives on host 192.168.0.5 and has routes with *.foo.com.
Router B lives on host 192.168.1.9 and has routes with *.example.com.
Separate DNS entries must resolve *.foo.com to the node hosting Router A and *.example.com to the node hosting Router B:
*.foo.com IN A 192.168.0.5
*.example.com IN A 192.168.1.9
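You can verify that the wildcard records resolve as expected with a DNS lookup tool such as dig; the hostname below is illustrative:
$ dig +short test.foo.com
192.168.0.5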
Ingress Controller sharding by using route labels means that the Ingress Controller serves any route in any namespace that is selected by the route selector.
Ingress Controller sharding is useful when balancing incoming traffic load among a set of Ingress Controllers and when isolating traffic to a specific Ingress Controller. For example, company A goes to one Ingress Controller and company B to another.
Edit the router-internal.yaml file:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: sharded
  namespace: openshift-ingress-operator
spec:
  domain: <apps-sharded.basedomain.example.net> (1)
  nodePlacement:
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/worker: ""
  routeSelector:
    matchLabels:
      type: sharded
(1) Specify a domain to be used by the Ingress Controller. This domain must be different from the default Ingress Controller domain.
Apply the Ingress Controller router-internal.yaml file:
$ oc apply -f router-internal.yaml
The Ingress Controller selects routes in any namespace that have the label type: sharded.
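For a route to be selected, add the matching label to it; for example, where the route and namespace names are placeholders:
$ oc label route <route-name> -n <namespace> type=sharded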
Create a new route using the domain configured in router-internal.yaml:
$ oc expose svc <service-name> --hostname <route-name>.apps-sharded.basedomain.example.net
Ingress Controller sharding by using namespace labels means that the Ingress Controller serves any route in any namespace that is selected by the namespace selector.
Ingress Controller sharding is useful when balancing incoming traffic load among a set of Ingress Controllers and when isolating traffic to a specific Ingress Controller. For example, company A goes to one Ingress Controller and company B to another.
Edit the router-internal.yaml file:
$ cat router-internal.yaml
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: sharded
  namespace: openshift-ingress-operator
spec:
  domain: <apps-sharded.basedomain.example.net> (1)
  nodePlacement:
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/worker: ""
  namespaceSelector:
    matchLabels:
      type: sharded
(1) Specify a domain to be used by the Ingress Controller. This domain must be different from the default Ingress Controller domain.
Apply the Ingress Controller router-internal.yaml file:
$ oc apply -f router-internal.yaml
The Ingress Controller selects routes in any namespace that has the label type: sharded.
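For example, add the matching label to a namespace so that all routes in it are served by this Ingress Controller; the namespace name is a placeholder:
$ oc label namespace <namespace> type=sharded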
Create a new route using the domain configured in router-internal.yaml:
$ oc expose svc <service-name> --hostname <route-name>.apps-sharded.basedomain.example.net
A route allows you to host your application at a URL. In this case, the hostname is not set and the route uses a subdomain instead. When you specify a subdomain, you automatically use the domain of the Ingress Controller that exposes the route. For situations where a route is exposed by multiple Ingress Controllers, the route is hosted at multiple URLs.
The following procedure describes how to create a route for Ingress Controller sharding, using the hello-openshift application as an example.
Ingress Controller sharding is useful when balancing incoming traffic load among a set of Ingress Controllers and when isolating traffic to a specific Ingress Controller. For example, company A goes to one Ingress Controller and company B to another.
You installed the OpenShift CLI (oc).
You are logged in as a project administrator.
You have a web application that exposes a port and an HTTP or TLS endpoint listening for traffic on the port.
You have configured the Ingress Controller for sharding.
Create a project called hello-openshift by running the following command:
$ oc new-project hello-openshift
Create a pod in the project by running the following command:
$ oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json
Create a service called hello-openshift by running the following command:
$ oc expose pod/hello-openshift
Create a route definition called hello-openshift-route.yaml:
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  labels:
    type: sharded (1)
  name: hello-openshift-edge
  namespace: hello-openshift
spec:
  subdomain: hello-openshift (2)
  tls:
    termination: edge
  to:
    kind: Service
    name: hello-openshift
(1) Both the label key and its corresponding label value must match the ones specified in the Ingress Controller. In this example, the Ingress Controller has the label key and value type: sharded.
(2) The route will be exposed using the value of the subdomain field. When you specify the subdomain field, you must leave the hostname unset. If you specify both the host and subdomain fields, then the route will use the value of the host field and ignore the subdomain field.
Use hello-openshift-route.yaml to create a route to the hello-openshift application by running the following command:
$ oc -n hello-openshift create -f hello-openshift-route.yaml
Get the status of the route with the following command:
$ oc -n hello-openshift get routes/hello-openshift-edge -o yaml
The resulting Route resource should look similar to the following:
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  labels:
    type: sharded
  name: hello-openshift-edge
  namespace: hello-openshift
spec:
  subdomain: hello-openshift
  tls:
    termination: edge
  to:
    kind: Service
    name: hello-openshift
status:
  ingress:
  - host: hello-openshift.<apps-sharded.basedomain.example.net> (1)
    routerCanonicalHostname: router-sharded.<apps-sharded.basedomain.example.net> (2)
    routerName: sharded (3)
(1) The hostname the Ingress Controller, or router, uses to expose the route. The value of the host field is automatically determined by the Ingress Controller and uses its domain. In this example, the domain of the Ingress Controller is <apps-sharded.basedomain.example.net>.
(2) The hostname of the Ingress Controller.
(3) The name of the Ingress Controller. In this example, the Ingress Controller has the name sharded.
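As a quick check, you can verify that the route responds at its sharded domain; the hostname below follows from this example and depends on your configured domain:
$ curl --head https://hello-openshift.<apps-sharded.basedomain.example.net>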
NodePortService endpoint publishing strategy
The NodePortService endpoint publishing strategy publishes the Ingress Controller using a Kubernetes NodePort service.
In this configuration, the Ingress Controller deployment uses container networking. A NodePortService is created to publish the deployment. The specific node ports are dynamically allocated by OpenShift Container Platform; however, to support static port allocations, your changes to the node port field of the managed NodePortService are preserved.
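A minimal sketch of enabling this strategy on an Ingress Controller; the controller name internal is an assumption for this example:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: internal
  namespace: openshift-ingress-operator
spec:
  endpointPublishingStrategy:
    type: NodePortService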
The Ingress NodePort endpoint publishing strategy involves the following concepts:
All the available nodes in the cluster have their own, externally accessible IP addresses. The service running in the cluster is bound to the unique NodePort for all the nodes.
When the client connects to a node that is down, for example, by connecting to the 10.0.128.4 IP address, the node port directly connects the client to an available node that is running the service. In this scenario, no load balancing is required. If the 10.0.128.4 address is down, another IP address must be used instead.
The Ingress Operator ignores any updates to the node port field of the service. By default, ports are allocated automatically, and you can access the port allocations for integrations. However, sometimes static port allocations are necessary to integrate with existing infrastructure that might not be easily reconfigured in response to dynamic ports. To achieve integrations with static node ports, you can update the managed service resource directly.
For more information, see the Kubernetes Services documentation on NodePort.
HostNetwork endpoint publishing strategy
The HostNetwork endpoint publishing strategy publishes the Ingress Controller on node ports where the Ingress Controller is deployed.
An Ingress Controller with the HostNetwork endpoint publishing strategy can have only one pod replica per node. If you want n replicas, you must use at least n nodes where those replicas can be scheduled. Because each pod replica requests ports 80 and 443 on the node host where it is scheduled, a replica cannot be scheduled to a node if another pod on the same node is using those ports.
The HostNetwork object has a hostNetwork field with the following default values for the optional binding ports: httpPort: 80, httpsPort: 443, and statsPort: 1936. By specifying different binding ports for your network, you can deploy multiple Ingress Controllers on the same node for the HostNetwork strategy. For example:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: internal
  namespace: openshift-ingress-operator
spec:
  domain: example.com
  endpointPublishingStrategy:
    type: HostNetwork
    hostNetwork:
      httpPort: 80
      httpsPort: 443
      statsPort: 1936
When a cluster administrator installs a new cluster without specifying that the cluster is private, the default Ingress Controller is created with a scope set to External. Cluster administrators can change an External scoped Ingress Controller to Internal.
You installed the oc CLI.
To change an External scoped Ingress Controller to Internal, enter the following command:
$ oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{"spec":{"endpointPublishingStrategy":{"type":"LoadBalancerService","loadBalancer":{"scope":"Internal"}}}}'
To check the status of the Ingress Controller, enter the following command:
$ oc -n openshift-ingress-operator get ingresscontrollers/default -o yaml
The Progressing status condition indicates whether you must take further action. For example, the status condition can indicate that you need to delete the service by entering the following command:
$ oc -n openshift-ingress delete services/router-default
If you delete the service, the Ingress Operator recreates it as Internal.
When a cluster administrator installs a new cluster without specifying that the cluster is private, the default Ingress Controller is created with a scope set to External.
The Ingress Controller’s scope can be configured to be Internal during installation or after, and cluster administrators can change an Internal Ingress Controller to External.
On some platforms, it is necessary to delete and recreate the service. On these platforms, changing the scope can disrupt Ingress traffic, potentially for several minutes, because the procedure can cause OpenShift Container Platform to deprovision the existing service load balancer, provision a new one, and update DNS.
You installed the oc CLI.
To change an Internal scoped Ingress Controller to External, enter the following command:
$ oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{"spec":{"endpointPublishingStrategy":{"type":"LoadBalancerService","loadBalancer":{"scope":"External"}}}}'
To check the status of the Ingress Controller, enter the following command:
$ oc -n openshift-ingress-operator get ingresscontrollers/default -o yaml
The Progressing status condition indicates whether you must take further action. For example, the status condition can indicate that you need to delete the service by entering the following command:
$ oc -n openshift-ingress delete services/router-default
If you delete the service, the Ingress Operator recreates it as External.