
The OpenShift Serverless Operator manages the global configuration of a Knative installation, including propagating values from the KnativeServing and KnativeEventing custom resources to system config maps. Any updates that you apply manually to these config maps are overwritten by the Operator. However, modifying the Knative custom resources allows you to set values for these config maps.

Knative has multiple config maps that are named with the prefix config-. All Knative config maps are created in the same namespace as the custom resource that they apply to. For example, if the KnativeServing custom resource is created in the knative-serving namespace, all Knative Serving config maps are also created in this namespace.

The spec.config in the Knative custom resources has one <name> entry for each config map, named config-<name>, with a value that is used for the config map data.
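
For example, the following minimal sketch adds an entry under spec.config for the config-domain config map. The example.com value is a placeholder domain used for illustration only:

Example spec.config entry
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  config:
    domain: # propagated by the Operator to the config-domain config map
      example.com: "" # placeholder domain for illustration only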

Configuring the default channel implementation

The default-ch-webhook config map can be used to specify the default channel implementation of Knative Eventing. The default channel implementation can be specified for the entire cluster, as well as for one or more namespaces. Currently the InMemoryChannel and KafkaChannel channel types are supported.

Prerequisites
  • You have cluster or dedicated administrator permissions on OpenShift Dedicated.

  • You have installed the OpenShift Serverless Operator and Knative Eventing on your cluster.

  • If you want to use Kafka channels as the default channel implementation, you must also install the KnativeKafka CR on your cluster.

Procedure
  • Modify the KnativeEventing custom resource to add configuration details for the default-ch-webhook config map:

    apiVersion: operator.knative.dev/v1alpha1
    kind: KnativeEventing
    metadata:
      name: knative-eventing
      namespace: knative-eventing
    spec:
      config: (1)
        default-ch-webhook: (2)
          default-ch-config: |
            clusterDefault: (3)
              apiVersion: messaging.knative.dev/v1
              kind: InMemoryChannel
              spec:
                delivery:
                  backoffDelay: PT0.5S
                  backoffPolicy: exponential
                  retry: 5
            namespaceDefaults: (4)
              my-namespace:
                apiVersion: messaging.knative.dev/v1beta1
                kind: KafkaChannel
                spec:
                  numPartitions: 1
                  replicationFactor: 1
    1 In spec.config, you can specify the config maps that you want to add modified configurations for.
    2 The default-ch-webhook config map can be used to specify the default channel implementation for the cluster or for one or more namespaces.
    3 The cluster-wide default channel type configuration. In this example, the default channel implementation for the cluster is InMemoryChannel.
    4 The namespace-scoped default channel type configuration. In this example, the default channel implementation for the my-namespace namespace is KafkaChannel.

    Configuring a namespace-specific default overrides any cluster-wide settings.
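
    For example, with this configuration in place, a generic Channel object that does not specify a channel template resolves to the configured default implementation. The following minimal sketch uses a placeholder channel name; the my-namespace namespace matches the namespaceDefaults example above:

    Example Channel that uses the default implementation
    apiVersion: messaging.knative.dev/v1
    kind: Channel
    metadata:
      name: example-channel # placeholder name
      namespace: my-namespace # resolves to KafkaChannel per the namespaceDefaults entry above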

Overriding system deployment configurations

You can override the default configurations for some specific deployments by modifying the deployments spec in the KnativeServing custom resource (CR). Currently, overriding default configuration settings is supported for the replicas, labels, annotations, and nodeSelector fields.

In the following example, a KnativeServing CR overrides the webhook deployment so that:

  • The deployment has 3 replicas.

  • The label example-label: label is added.

  • The annotation example-annotation: annotation is added.

  • The nodeSelector field is set to select nodes with the disktype: hdd label.

The KnativeServing CR label and annotation settings override the deployment’s labels and annotations for both the deployment itself and the resulting pods.

KnativeServing CR example
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: ks
  namespace: knative-serving
spec:
  high-availability:
    replicas: 2
  deployments:
  - name: webhook
    replicas: 3
    labels:
      example-label: label
    annotations:
      example-annotation: annotation
    nodeSelector:
      disktype: hdd
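
Applying this override causes the Operator to reconcile the webhook deployment with the new values. The following abbreviated sketch shows the expected shape of the result, not literal cluster output:

Expected webhook deployment (abbreviated)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webhook
  namespace: knative-serving
  labels:
    example-label: label # added from the KnativeServing CR
  annotations:
    example-annotation: annotation # added from the KnativeServing CR
spec:
  replicas: 3 # overridden replica count
  template:
    metadata:
      labels:
        example-label: label # labels and annotations are also propagated to the pods
      annotations:
        example-annotation: annotation
    spec:
      nodeSelector:
        disktype: hdd # pods are scheduled to nodes with this label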

Configuring the EmptyDir extension

emptyDir volumes are empty volumes that are created when a pod is created, and are used to provide temporary working disk space. emptyDir volumes are deleted when the pod they were created for is deleted.

The kubernetes.podspec-volumes-emptydir extension controls whether emptyDir volumes can be used with Knative Serving. To enable using emptyDir volumes, you must modify the KnativeServing custom resource (CR) to include the following YAML:

Example KnativeServing CR
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
spec:
  config:
    features:
      kubernetes.podspec-volumes-emptydir: enabled
...
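
After the extension is enabled, a Knative service can mount an emptyDir volume. The following minimal sketch uses placeholder values for the service name, container image, and mount path:

Example service with an emptyDir volume
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: example-service # placeholder name
spec:
  template:
    spec:
      containers:
        - image: <image> # placeholder container image
          volumeMounts:
            - name: scratch
              mountPath: /tmp/scratch # placeholder mount path
      volumes:
        - name: scratch
          emptyDir: {} # temporary working space that is deleted with the pod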

HTTPS redirection global settings

HTTPS redirection redirects incoming HTTP requests to HTTPS, so that these requests are served over an encrypted connection. You can enable HTTPS redirection for all services on the cluster by configuring the httpProtocol spec for the KnativeServing custom resource (CR).

Example KnativeServing CR that enables HTTPS redirection
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
spec:
  config:
    network:
      httpProtocol: "redirected"
...

Setting the URL scheme for external routes

The URL scheme of external routes defaults to HTTPS for enhanced security. This scheme is determined by the default-external-scheme key in the KnativeServing custom resource (CR) spec.

Default spec
...
spec:
  config:
    network:
      default-external-scheme: "https"
...

You can override the default spec to use HTTP by modifying the default-external-scheme key:

HTTP override spec
...
spec:
  config:
    network:
      default-external-scheme: "http"
...

Setting the Kourier Gateway service type

The Kourier Gateway is exposed by default as the ClusterIP service type. This service type is determined by the service-type ingress spec in the KnativeServing custom resource (CR).

Default spec
...
spec:
  ingress:
    kourier:
      service-type: ClusterIP
...

You can override the default service type to use a load balancer service type instead by modifying the service-type spec:

LoadBalancer override spec
...
spec:
  ingress:
    kourier:
      service-type: LoadBalancer
...

Enabling PVC support

Some serverless applications need permanent data storage. To achieve this, you can configure persistent volume claims (PVCs) for your Knative services.

PVC support for Knative services is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

Procedure
  1. To enable Knative Serving to use PVCs and write to them, modify the KnativeServing custom resource (CR) to include the following YAML:

    Enabling PVCs with write access
    ...
    spec:
      config:
        features:
          "kubernetes.podspec-persistent-volume-claim": enabled
          "kubernetes.podspec-persistent-volume-write": enabled
    ...
    • The kubernetes.podspec-persistent-volume-claim extension controls whether persistent volumes (PVs) can be used with Knative Serving.

    • The kubernetes.podspec-persistent-volume-write extension controls whether PVs are available to Knative Serving with write access.

  2. To claim a PV, modify your service to include the PV configuration. For example, you might have a persistent volume claim with the following configuration:

    Use a storage class that supports the access mode that you are requesting. For example, you can use the ocs-storagecluster-cephfs storage class for the ReadWriteMany access mode.

    PersistentVolumeClaim configuration
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: example-pv-claim
      namespace: my-ns
    spec:
      accessModes:
        - ReadWriteMany
      storageClassName: ocs-storagecluster-cephfs
      resources:
        requests:
          storage: 1Gi

    In this case, to claim a PV with write access, modify your service as follows:

    Knative service PVC configuration
    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      namespace: my-ns
    ...
    spec:
     template:
       spec:
         containers:
             ...
             volumeMounts: (1)
               - mountPath: /data
                 name: mydata
                 readOnly: false
         volumes:
           - name: mydata
             persistentVolumeClaim: (2)
               claimName: example-pv-claim
               readOnly: false (3)
    1 Volume mount specification.
    2 Persistent volume claim specification.
    3 Flag that controls read-only access. Setting it to false enables write access to the volume.

    To successfully use persistent storage in Knative services, you need additional configuration, such as setting the correct user permissions for the Knative container user.

Enabling init containers

Init containers are specialized containers that are run before application containers in a pod. They are generally used to implement initialization logic for an application, which may include running setup scripts or downloading required configurations. You can enable the use of init containers for Knative services by modifying the KnativeServing custom resource (CR).

Init containers for Knative services is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

Init containers may cause longer application start-up times and should be used with caution for serverless applications, which are expected to scale up and down frequently.

Prerequisites
  • You have installed the OpenShift Serverless Operator and Knative Serving on your cluster.

  • You have cluster or dedicated administrator permissions.

Procedure
  • Enable the use of init containers by adding the kubernetes.podspec-init-containers flag to the KnativeServing CR:

    Example KnativeServing CR
    apiVersion: operator.knative.dev/v1alpha1
    kind: KnativeServing
    metadata:
      name: knative-serving
    spec:
      config:
        features:
          kubernetes.podspec-init-containers: enabled
    ...
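
    With the flag enabled, a Knative service can declare init containers in its pod spec. The following minimal sketch uses placeholder values for the service name and images:

    Example service with an init container
    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: example-service # placeholder name
    spec:
      template:
        spec:
          initContainers:
            - name: setup # placeholder init container that runs initialization logic
              image: <setup-image> # placeholder image
          containers:
            - image: <application-image> # placeholder application image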