Knative Kafka provides integration options for you to use supported versions of the Apache Kafka message streaming platform with OpenShift Serverless. Kafka provides options for event source, channel, broker, and event sink capabilities.
In addition to the Knative Eventing components that are provided as part of a core OpenShift Serverless installation, cluster administrators can install the KnativeKafka custom resource (CR).
Knative Kafka is not currently supported for IBM zSystems and IBM Power.
The KnativeKafka CR provides users with additional options, such as the following; a sample CR that enables these components is shown after the list:
Kafka source
Kafka channel
Kafka broker
Kafka sink
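For illustration, a minimal KnativeKafka CR that enables all four components might look like the following sketch. The bootstrap server values and the partition and replication settings are placeholders, and the field layout assumes the operator.serverless.openshift.io/v1alpha1 schema; check the schema for your installed OpenShift Serverless version.
apiVersion: operator.serverless.openshift.io/v1alpha1
kind: KnativeKafka
metadata:
  name: knative-kafka
  namespace: knative-eventing
spec:
  source:
    enabled: true
  channel:
    enabled: true
    bootstrapServers: <bootstrap_servers>
  broker:
    enabled: true
    defaultConfig:
      bootstrapServers: <bootstrap_servers>
      numPartitions: <num_partitions>
      replicationFactor: <replication_factor>
  sink:
    enabled: true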
Simple Authentication and Security Layer (SASL) is used by Apache Kafka for authentication. If you use SASL authentication on your cluster, users must provide credentials to Knative for communicating with the Kafka cluster; otherwise events cannot be produced or consumed.
You have cluster or dedicated administrator permissions on OpenShift Container Platform.
The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka CR are installed on your OpenShift Container Platform cluster.
You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
You have a username and password for a Kafka cluster.
You have chosen the SASL mechanism to use, for example, PLAIN, SCRAM-SHA-256, or SCRAM-SHA-512.
If TLS is enabled, you also need the ca.crt certificate file for the Kafka cluster.
You have installed the OpenShift CLI (oc).
Create the certificate files as secrets in your chosen namespace:
$ oc create secret -n <namespace> generic <kafka_auth_secret> \
--from-file=ca.crt=caroot.pem \
--from-literal=password="SecretPassword" \
--from-literal=saslType="SCRAM-SHA-512" \ (1)
--from-literal=user="my-sasl-user"
1 | The SASL type can be PLAIN, SCRAM-SHA-256, or SCRAM-SHA-512. |
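As a quick, optional check, you can confirm that the secret was created with the expected keys:
$ oc describe secret <kafka_auth_secret> -n <namespace>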
Create or modify your Kafka source so that it contains the following spec configuration:
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: example-source
spec:
...
  net:
    sasl:
      enable: true
      user:
        secretKeyRef:
          name: <kafka_auth_secret>
          key: user
      password:
        secretKeyRef:
          name: <kafka_auth_secret>
          key: password
      type:
        secretKeyRef:
          name: <kafka_auth_secret>
          key: saslType
    tls:
      enable: true
      caCert: (1)
        secretKeyRef:
          name: <kafka_auth_secret>
          key: ca.crt
...
1 | The caCert spec is not required if you are using a public cloud Kafka service, such as Red Hat OpenShift Streams for Apache Kafka. |
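After updating the spec, you can apply the source and check that it becomes ready. The following is a quick sketch; <kafka_source_filename> is a placeholder for your manifest file, and example-source matches the name used above:
$ oc apply -f <kafka_source_filename>
$ oc get kafkasource example-source -n <namespace>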
Transport Layer Security (TLS) is used by Apache Kafka clients and servers to encrypt traffic between Knative and Kafka, as well as for authentication. TLS is the only supported method of traffic encryption for Knative Kafka.
Simple Authentication and Security Layer (SASL) is used by Apache Kafka for authentication. If you use SASL authentication on your cluster, users must provide credentials to Knative for communicating with the Kafka cluster; otherwise events cannot be produced or consumed.
The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka custom resource (CR) are installed on your OpenShift Container Platform cluster.
Kafka sink is enabled in the KnativeKafka CR.
You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
You have a Kafka cluster CA certificate stored as a .pem file.
You have a Kafka cluster client certificate and a key stored as .pem files.
You have installed the OpenShift CLI (oc).
You have chosen the SASL mechanism to use, for example, PLAIN, SCRAM-SHA-256, or SCRAM-SHA-512.
Create the certificate files as a secret in the same namespace as your KafkaSink object:
Certificates and keys must be in PEM format.
For authentication using SASL without encryption:
$ oc create secret -n <namespace> generic <secret_name> \
--from-literal=protocol=SASL_PLAINTEXT \
--from-literal=sasl.mechanism=<sasl_mechanism> \
--from-literal=user=<username> \
--from-literal=password=<password>
For authentication using SASL and encryption using TLS:
$ oc create secret -n <namespace> generic <secret_name> \
--from-literal=protocol=SASL_SSL \
--from-literal=sasl.mechanism=<sasl_mechanism> \
--from-file=ca.crt=<my_caroot.pem_file_path> \ (1)
--from-literal=user=<username> \
--from-literal=password=<password>
1 | The ca.crt can be omitted to use the system’s root CA set if you are using a public cloud managed Kafka service, such as Red Hat OpenShift Streams for Apache Kafka. |
For authentication and encryption using TLS:
$ oc create secret -n <namespace> generic <secret_name> \
--from-literal=protocol=SSL \
--from-file=ca.crt=<my_caroot.pem_file_path> \ (1)
--from-file=user.crt=<my_cert.pem_file_path> \
--from-file=user.key=<my_key.pem_file_path>
1 | The ca.crt can be omitted to use the system’s root CA set if you are using a public cloud managed Kafka service, such as Red Hat OpenShift Streams for Apache Kafka. |
Create or modify a KafkaSink object and add a reference to your secret in the auth spec:
apiVersion: eventing.knative.dev/v1alpha1
kind: KafkaSink
metadata:
  name: <sink_name>
  namespace: <namespace>
spec:
...
  auth:
    secret:
      ref:
        name: <secret_name>
...
Apply the KafkaSink object:
$ oc apply -f <filename>
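As an optional check, you can confirm that the sink becomes ready; the names below match the placeholders used above:
$ oc get kafkasink <sink_name> -n <namespace>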
You can configure the replication factor, bootstrap servers, and the number of topic partitions for a Kafka broker by creating a config map and referencing this config map in the Kafka Broker object.
You have cluster or dedicated administrator permissions on OpenShift Container Platform.
The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka custom resource (CR) are installed on your OpenShift Container Platform cluster.
You have created a project or have access to a project that has the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
You have installed the OpenShift CLI (oc).
Modify the kafka-broker-config config map, or create your own config map that contains the following configuration:
apiVersion: v1
kind: ConfigMap
metadata:
  name: <config_map_name> (1)
  namespace: <namespace> (2)
data:
  default.topic.partitions: <integer> (3)
  default.topic.replication.factor: <integer> (4)
  bootstrap.servers: <list_of_servers> (5)
1 | The config map name. |
2 | The namespace where the config map exists. |
3 | The number of topic partitions for the Kafka broker. This controls how quickly events can be sent to the broker. A higher number of partitions requires greater compute resources. |
4 | The replication factor of topic messages. This helps prevent data loss. A higher replication factor requires greater compute resources and more storage. |
5 | A comma-separated list of bootstrap servers. This can be inside or outside of the OpenShift Container Platform cluster, and is a list of Kafka clusters that the broker receives events from and sends events to. |
The default.topic.replication.factor value must be less than or equal to the number of Kafka broker instances in your cluster.
The following is an example Kafka broker config map:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kafka-broker-config
  namespace: knative-eventing
data:
  default.topic.partitions: "10"
  default.topic.replication.factor: "3"
  bootstrap.servers: "my-cluster-kafka-bootstrap.kafka:9092"
Apply the config map:
$ oc apply -f <config_map_filename>
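Optionally, confirm that the values were stored as expected:
$ oc get configmap <config_map_name> -n <namespace> -o yaml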
Specify the config map for the Kafka Broker object:
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: <broker_name> (1)
  namespace: <namespace> (2)
  annotations:
    eventing.knative.dev/broker.class: Kafka (3)
spec:
  config:
    apiVersion: v1
    kind: ConfigMap
    name: <config_map_name> (4)
    namespace: <namespace> (5)
...
1 | The broker name. |
2 | The namespace where the broker exists. |
3 | The broker class annotation. In this example, the broker is a Kafka broker that uses the class value Kafka. |
4 | The config map name. |
5 | The namespace where the config map exists. |
Apply the broker:
$ oc apply -f <broker_filename>
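Optionally, verify that the broker is ready and has been assigned an address; the names below match the placeholders used above:
$ oc get broker <broker_name> -n <namespace>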