Knative Serving on OpenShift Container Platform builds on Kubernetes and Istio to support deploying and serving serverless applications.
It creates a set of Kubernetes Custom Resource Definitions (CRDs) that are used to define and control the behavior of serverless workloads on an OpenShift Container Platform cluster.
These CRDs are building blocks that can be used to address complex use cases, for example:

* Rapidly deploying serverless containers.
* Automatically scaling pods.
* Viewing point-in-time snapshots of deployed code and configurations.
The components described in this section are the resources that Knative Serving requires to be configured and to run correctly.
The service.serving.knative.dev resource automatically manages the whole lifecycle of a serverless workload on a cluster. It controls the creation of other objects to ensure that an application has a route, a configuration, and a new revision for each update of the service. Services can be defined to always route traffic to the latest revision or to a pinned revision.
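As an illustration, a minimal Service manifest might look like the following sketch. The name and namespace are hypothetical, the sample image is taken from the upstream Knative samples, and the apiVersion can vary between Knative releases:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go        # hypothetical service name
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "Knative"
```

Applying this single resource causes Knative to create the underlying route, configuration, and an initial revision on your behalf.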
The route.serving.knative.dev resource maps a network endpoint to one or more Knative revisions. You can manage the traffic in several ways, including fractional traffic splitting and named routes.
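Fractional traffic splitting can be sketched as a traffic block in the Service spec, which Knative reflects in the managed route. The revision names and tag below are hypothetical:

```yaml
spec:
  traffic:
    - revisionName: helloworld-go-00001   # hypothetical revision name
      percent: 90
    - revisionName: helloworld-go-00002   # hypothetical revision name
      percent: 10
      tag: candidate                      # creates a named sub-route for direct testing
```

Here 90% of requests go to the first revision and 10% to the second, while the tag exposes a dedicated URL for the candidate revision regardless of the percentage split.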
The configuration.serving.knative.dev resource maintains the desired state for your deployment. Modifying a configuration creates a new revision.
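For example, updating the container image in a Configuration such as the following sketch (the name and image tag are hypothetical) triggers the creation of a new revision:

```yaml
apiVersion: serving.knative.dev/v1
kind: Configuration
metadata:
  name: helloworld-go            # hypothetical configuration name
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go:v2   # changing the image stamps a new revision
```

Each such change leaves the previous revision intact, so earlier snapshots remain available for rollback or traffic splitting.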
The revision.serving.knative.dev resource is a point-in-time snapshot of the code and configuration for each modification made to the workload. Revisions are immutable objects and can be retained for as long as needed.
Cluster administrators can modify the revision.serving.knative.dev resource to enable automatic scaling of pods in an OpenShift Container Platform cluster.
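In practice, scaling behavior is commonly tuned through Knative's autoscaling annotations on the revision template, which are then recorded on each revision. The following is a sketch with illustrative values:

```yaml
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "1"    # keep at least one pod running
        autoscaling.knative.dev/maxScale: "10"   # upper bound on pod count
        autoscaling.knative.dev/target: "50"     # target concurrent requests per pod
```

Setting minScale above zero avoids cold starts at the cost of always-on pods, while maxScale caps resource consumption under load spikes.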