Transport Layer Security (TLS) is used by Apache Kafka clients and servers to encrypt traffic between Knative and Kafka, as well as for authentication. TLS is the only supported method of traffic encryption for Knative Kafka.
You have cluster or dedicated administrator permissions on OpenShift Container Platform.
The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka CR are installed on your OpenShift Container Platform cluster.
You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
You have a Kafka cluster CA certificate stored as a .pem file.
You have a Kafka cluster client certificate and a key stored as .pem files.
Install the OpenShift CLI (oc).
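For a test environment, you can generate a throwaway CA and client certificate with openssl, using the same file names (caroot.pem, certificate.pem, key.pem) that the secret-creation command below expects. This is only a sketch with hypothetical subject names; a production cluster would use certificates issued by your Kafka CA.

```shell
# Create a short-lived test CA (caroot.pem) and its private key (hypothetical subject)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=test-ca" -keyout ca.key -out caroot.pem

# Create a client key (key.pem) and a certificate signing request
openssl req -newkey rsa:2048 -nodes \
  -subj "/CN=test-client" -keyout key.pem -out client.csr

# Sign the client certificate (certificate.pem) with the test CA
openssl x509 -req -in client.csr -CA caroot.pem -CAkey ca.key \
  -CAcreateserial -days 1 -out certificate.pem

# Confirm the client certificate chains to the CA
openssl verify -CAfile caroot.pem certificate.pem
```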
Create the certificate files as secrets in your chosen namespace:
$ oc create secret -n <namespace> generic <kafka_auth_secret> \
--from-file=ca.crt=caroot.pem \
--from-file=user.crt=certificate.pem \
--from-file=user.key=key.pem
Use the key names ca.crt, user.crt, and user.key. Do not change them.
Start editing the KnativeKafka custom resource:
$ oc edit knativekafka
Reference your secret and the namespace of the secret:
apiVersion: operator.serverless.openshift.io/v1alpha1
kind: KnativeKafka
metadata:
  namespace: knative-eventing
  name: knative-kafka
spec:
  channel:
    authSecretName: <kafka_auth_secret>
    authSecretNamespace: <kafka_auth_secret_namespace>
    bootstrapServers: <bootstrap_servers>
    enabled: true
  source:
    enabled: true
Make sure to specify the matching port in the bootstrap server.
For example:
apiVersion: operator.serverless.openshift.io/v1alpha1
kind: KnativeKafka
metadata:
  namespace: knative-eventing
  name: knative-kafka
spec:
  channel:
    authSecretName: tls-user
    authSecretNamespace: kafka
    bootstrapServers: eventing-kafka-bootstrap.kafka.svc:9094
    enabled: true
  source:
    enabled: true
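As an alternative to the interactive oc edit session, the same channel fields can be applied non-interactively with a merge patch. This sketch reuses the example values above and assumes the CR is named knative-kafka in the knative-eventing namespace, as in the examples:

```shell
# Set the auth secret, its namespace, and the bootstrap server in one step
oc patch knativekafka knative-kafka -n knative-eventing --type merge -p \
  '{"spec":{"channel":{"authSecretName":"tls-user","authSecretNamespace":"kafka","bootstrapServers":"eventing-kafka-bootstrap.kafka.svc:9094","enabled":true}}}'
```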
Simple Authentication and Security Layer (SASL) is used by Apache Kafka for authentication. If you use SASL authentication on your cluster, users must provide credentials to Knative for communicating with the Kafka cluster; otherwise events cannot be produced or consumed.
You have cluster or dedicated administrator permissions on OpenShift Container Platform.
The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka CR are installed on your OpenShift Container Platform cluster.
You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
You have a username and password for a Kafka cluster.
You have chosen the SASL mechanism to use, for example, PLAIN, SCRAM-SHA-256, or SCRAM-SHA-512.
If TLS is enabled, you also need the ca.crt certificate file for the Kafka cluster.
Install the OpenShift CLI (oc).
Create a secret containing your credentials and the CA certificate in your chosen namespace:
$ oc create secret -n <namespace> generic <kafka_auth_secret> \
--from-file=ca.crt=caroot.pem \
--from-literal=password="SecretPassword" \
--from-literal=saslType="SCRAM-SHA-512" \
--from-literal=user="my-sasl-user"
Use the key names ca.crt, password, user, and saslType. Do not change them.
If you want to use SASL with public CA certificates, you must use the tls.enabled=true flag, rather than the ca.crt argument, when creating the secret. For example:
$ oc create secret -n <namespace> generic <kafka_auth_secret> \
--from-literal=tls.enabled=true \
--from-literal=password="SecretPassword" \
--from-literal=saslType="SCRAM-SHA-512" \
--from-literal=user="my-sasl-user"
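Like any Kubernetes secret, the values passed with --from-literal are stored base64-encoded in the secret's data map. A quick local illustration with the example user value from the command above:

```shell
# Encode the example user value to see what the secret's data map will hold
printf 'my-sasl-user' | base64
# → bXktc2FzbC11c2Vy

# Decode when reading values back out of a secret
printf 'bXktc2FzbC11c2Vy' | base64 -d
# → my-sasl-user
```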
Start editing the KnativeKafka custom resource:
$ oc edit knativekafka
Reference your secret and the namespace of the secret:
apiVersion: operator.serverless.openshift.io/v1alpha1
kind: KnativeKafka
metadata:
  namespace: knative-eventing
  name: knative-kafka
spec:
  channel:
    authSecretName: <kafka_auth_secret>
    authSecretNamespace: <kafka_auth_secret_namespace>
    bootstrapServers: <bootstrap_servers>
    enabled: true
  source:
    enabled: true
Make sure to specify the matching port in the bootstrap server.
For example:
apiVersion: operator.serverless.openshift.io/v1alpha1
kind: KnativeKafka
metadata:
  namespace: knative-eventing
  name: knative-kafka
spec:
  channel:
    authSecretName: scram-user
    authSecretNamespace: kafka
    bootstrapServers: eventing-kafka-bootstrap.kafka.svc:9093
    enabled: true
  source:
    enabled: true