Apache Kafka on OpenShift Serverless is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

The Apache Kafka event source brings messages into Knative. It reads events from an Apache Kafka cluster and passes these events to an event sink so that they can be consumed. You can use the KafkaSource event source with OpenShift Serverless.

Creating a Kafka event source by using the web console

You can create and verify a Kafka event source from the OpenShift Container Platform web console.

Prerequisites
  • The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka custom resource are installed on your cluster.

  • You have logged in to the web console.

  • You are in the Developer perspective.

  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.

Procedure
  1. Navigate to the Add page and select Event Source.

  2. In the Event Sources page, select Kafka Source in the Type section.

  3. Configure the Kafka Source settings:

    1. Add a comma-separated list of Bootstrap Servers.

    2. Add a comma-separated list of Topics.

    3. Add a Consumer Group.

    4. Select the Service Account Name for the service account that you created (an example command for creating a service account follows this procedure).

    5. Select the Sink for the event source. A Sink can be either a Resource, such as a channel, broker, or service, or a URI.

    6. Enter a Name for the Kafka event source.

  4. Click Create.
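
If you have not yet created a service account to select in the form, you can create one from the CLI before you begin. The following is a minimal sketch only; the name kafka-source-sa and the project name are example values, not required ones:

    $ oc create serviceaccount kafka-source-sa -n <project-name>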

Verification

You can verify that the Kafka event source was created and is connected to the sink by viewing the Topology page.

  1. In the Developer perspective, navigate to Topology.

  2. View the Kafka event source and sink.

    (Image: the Kafka source and service in the Topology view)

Creating a Kafka event source by using the kn CLI

This section describes how to create a Kafka event source by using the kn CLI.

Creating a Kafka event source by using the `kn` CLI is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

Prerequisites
  • The OpenShift Serverless Operator, Knative Eventing, Knative Serving, and the KnativeKafka custom resource are installed on your cluster.

  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.

  • You have access to a Red Hat AMQ Streams (Kafka) cluster that produces the Kafka messages you want to import.
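
If you do not already have a Kafka cluster available, you can create a development cluster by applying an AMQ Streams Kafka object. The following is a minimal sketch only, assuming that the AMQ Streams Operator is installed and that the kafka namespace exists; the cluster name my-cluster and the ephemeral storage are example choices, and the exact API version depends on your AMQ Streams release:

    apiVersion: kafka.strimzi.io/v1beta2
    kind: Kafka
    metadata:
      name: my-cluster
      namespace: kafka
    spec:
      kafka:
        replicas: 3
        listeners:
          - name: plain
            port: 9092        # plain listener matching the bootstrap address used in this section
            type: internal
            tls: false
        storage:
          type: ephemeral     # development only; data is lost when pods restart
      zookeeper:
        replicas: 3
        storage:
          type: ephemeral
      entityOperator:
        topicOperator: {}     # enables declarative topic management with KafkaTopic objects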

Procedure
  1. To verify that the Kafka event source is working, create a Knative service that dumps incoming events into the service logs:

    $ kn service create event-display \
        --image quay.io/openshift-knative/knative-eventing-sources-event-display
  2. Create a KafkaSource resource:

    $ kn source kafka create mykafkasrc \
        --servers my-cluster-kafka-bootstrap.kafka.svc:9092 \
        --topics my-topic --consumergroup my-consumer-group \
        --sink event-display

    The --servers, --topics, and --consumergroup options specify the connection parameters to the Kafka cluster. The --consumergroup option is optional.
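    Because --servers and --topics accept comma-separated lists, a single KafkaSource can consume from multiple topics. The --sink option also accepts targets other than a service, such as a broker or a URI. The following is a sketch only, assuming that your kn version supports the broker: sink prefix; the source name and the topic names topic-a and topic-b are hypothetical:

    $ kn source kafka create mymultitopicsrc \
        --servers my-cluster-kafka-bootstrap.kafka.svc:9092 \
        --topics topic-a,topic-b \
        --sink broker:default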

  3. Optional: View details about the KafkaSource resource you created:

    $ kn source kafka describe mykafkasrc
    Example output
    Name:              mykafkasrc
    Namespace:         kafka
    Age:               1h
    BootstrapServers:  my-cluster-kafka-bootstrap.kafka.svc:9092
    Topics:            my-topic
    ConsumerGroup:     my-consumer-group
    
    Sink:
      Name:       event-display
      Namespace:  default
      Resource:   Service (serving.knative.dev/v1)
    
    Conditions:
      OK TYPE            AGE REASON
      ++ Ready            1h
      ++ Deployed         1h
      ++ SinkProvided     1h
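
    You can also list the event sources in the current namespace, including Kafka sources, by using the core kn command. This is a sketch only; the output columns can vary between kn versions:

    $ kn source list
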
Verification
  1. Trigger the Kafka instance to send a message to the topic:

    $ oc -n kafka run kafka-producer \
        -ti --image=quay.io/strimzi/kafka:latest-kafka-2.7.0 --rm=true \
        --restart=Never -- bin/kafka-console-producer.sh \
        --broker-list my-cluster-kafka-bootstrap:9092 --topic my-topic

    Enter a message at the prompt. This command assumes that:

    • The Kafka cluster is installed in the kafka namespace.

    • The KafkaSource object has been configured to use the my-topic topic (a sketch for creating this topic, if it does not already exist, follows these verification steps).

  2. Verify that the message arrived by viewing the logs:

    $ oc logs $(oc get pod -o name | grep event-display) -c user-container
    Example output
    ☁️  cloudevents.Event
    Validation: valid
    Context Attributes,
      specversion: 1.0
      type: dev.knative.kafka.event
      source: /apis/v1/namespaces/default/kafkasources/mykafkasrc#my-topic
      subject: partition:46#0
      id: partition:46/offset:0
      time: 2021-03-10T11:21:49.4Z
    Extensions,
      traceparent: 00-161ff3815727d8755848ec01c866d1cd-7ff3916c44334678-00
    Data,
      Hello!
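
If the producer command fails because the my-topic topic does not exist, you can create the topic declaratively by applying an AMQ Streams KafkaTopic object. This is a minimal sketch only, assuming that the Topic Operator is running for the my-cluster cluster; the partition and replica counts are example values:

    apiVersion: kafka.strimzi.io/v1beta2
    kind: KafkaTopic
    metadata:
      name: my-topic
      namespace: kafka
      labels:
        strimzi.io/cluster: my-cluster   # the Topic Operator matches topics to clusters by this label
    spec:
      partitions: 10
      replicas: 1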

Creating a Kafka event source by using YAML

You can create a Kafka event source by using YAML.

Prerequisites
  • The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka custom resource are installed on your cluster.

  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.

Procedure
  1. Create a YAML file containing the following:

    apiVersion: sources.knative.dev/v1beta1
    kind: KafkaSource
    metadata:
      name: <source-name>
    spec:
      consumerGroup: <group-name> (1)
      bootstrapServers:
      - <list-of-bootstrap-servers>
      topics:
      - <list-of-topics> (2)
      sink: (3)
    1 A consumer group is a group of consumers that use the same group ID, and consume data from a topic.
    2 A topic provides a destination for the storage of data. Each topic is split into one or more partitions.
    3 A sink is the destination that events are sent to, such as a channel, broker, or service. You can specify a resource by using ref, as shown in the following example, or an address by using uri.
    Example KafkaSource object
    apiVersion: sources.knative.dev/v1beta1
    kind: KafkaSource
    metadata:
      name: kafka-source
    spec:
      consumerGroup: knative-group
      bootstrapServers:
      - my-cluster-kafka-bootstrap.kafka:9092
      topics:
      - knative-demo-topic
      sink:
        ref:
          apiVersion: serving.knative.dev/v1
          kind: Service
          name: event-display
  2. Apply the YAML file:

    $ oc apply -f <filename>
Verification
  • Verify that the Kafka event source was created:

    $ oc get pods
    Example output
    NAME                                    READY     STATUS    RESTARTS   AGE
    kafkasource-kafka-source-5ca0248f-...   1/1       Running   0          13m
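
  • Optional: Inspect the KafkaSource object directly. This is a sketch only; the columns that oc prints can vary between versions, but the READY condition reports True when the source is running:

    $ oc get kafkasource kafka-source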
