
When you create an event source, you can specify a sink that events are sent to from the source. A sink is an addressable or a callable resource that can receive incoming events from other resources. Knative services, channels, and brokers are all examples of sinks.

Addressable objects receive and acknowledge an event delivered over HTTP to an address defined in their status.address.url field. As a special case, the core Kubernetes Service object also fulfills the addressable interface.

Callable objects are able to receive an event delivered over HTTP and transform the event, returning 0 or 1 new events in the HTTP response. These returned events may be further processed in the same way that events from an external event source are processed.
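The following snippet illustrates the addressable interface described above. It shows where the resolved URL appears in the status of a Knative service; the service name and namespace are illustrative:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: event-display
  namespace: default
status:
  address:
    url: http://event-display.default.svc.cluster.local
```

Event sources resolve this status.address.url field to determine where to deliver events over HTTP.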

Creating a Kafka event sink

As a developer, you can create an event sink to receive events from a particular source and send them to a Kafka topic.

Prerequisites
  • You have installed the OpenShift Serverless Operator, with Knative Serving, Knative Eventing, and Knative Kafka APIs, from the OperatorHub.

  • You have created a Kafka topic in your Kafka environment.

Procedure
  1. In the Developer perspective, navigate to the +Add view.

  2. Click Event Sink in the Eventing catalog.

  3. Search for KafkaSink in the catalog items and click it.

  4. Click Create Event Sink.

  5. In the form view, type the URL of the bootstrap server, which is a combination of a host name and port.

  6. Type the name of the topic to send event data to.

  7. Type the name of the event sink.

  8. Click Create.
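The form in the steps above creates a KafkaSink resource. The following is a sketch of what that object might look like; the sink name, namespace, topic, and bootstrap server address are illustrative values:

```yaml
apiVersion: eventing.knative.dev/v1alpha1
kind: KafkaSink
metadata:
  name: my-kafka-sink
  namespace: default
spec:
  topic: my-topic
  bootstrapServers:
    - my-cluster-kafka-bootstrap.kafka:9092
```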

Verification
  1. In the Developer perspective, navigate to the Topology view.

  2. Click the created event sink to view its details in the right panel.

Knative CLI sink flag

When you create an event source by using the Knative (kn) CLI, you can specify a sink that events are sent to from that resource by using the --sink flag. The sink can be any addressable or callable resource that can receive incoming events from other resources.

The following example creates a sink binding that uses a service, http://event-display.svc.cluster.local, as the sink:

Example command using the sink flag
$ kn source binding create bind-heartbeat \
  --namespace sinkbinding-example \
  --subject "Job:batch/v1:app=heartbeat-cron" \
  --sink http://event-display.svc.cluster.local \ (1)
  --ce-override "sink=bound"
1 svc in http://event-display.svc.cluster.local determines that the sink is a Knative service. Other default sink prefixes include channel and broker.

You can configure which CRs can be used with the --sink flag for Knative (kn) CLI commands by Customizing kn.
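For reference, the kn command shown above creates a SinkBinding object similar to the following sketch. The field layout follows the sources.knative.dev/v1 SinkBinding schema; the values are taken from the example command:

```yaml
apiVersion: sources.knative.dev/v1
kind: SinkBinding
metadata:
  name: bind-heartbeat
  namespace: sinkbinding-example
spec:
  subject:
    apiVersion: batch/v1
    kind: Job
    selector:
      matchLabels:
        app: heartbeat-cron
  sink:
    uri: http://event-display.svc.cluster.local
  ceOverrides:
    extensions:
      sink: bound
```

When a sink is given as a URI, it is recorded in the sink.uri field; when it is given with a prefix such as svc, channel, or broker, it is recorded as a sink.ref object reference instead.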

Connect an event source to a sink using the Developer perspective

When you create an event source by using the OpenShift Dedicated web console, you can specify a sink that events are sent to from that source. The sink can be any addressable or callable resource that can receive incoming events from other resources.

Prerequisites
  • The OpenShift Serverless Operator, Knative Serving, and Knative Eventing are installed on your OpenShift Dedicated cluster.

  • You have logged in to the web console and are in the Developer perspective.

  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Dedicated.

  • You have created a sink, such as a Knative service, channel or broker.

Procedure
  1. Create an event source of any type by navigating to +Add → Event Source and selecting the event source type that you want to create.

  2. In the Sink section of the Create Event Source form view, select your sink in the Resource list.

  3. Click Create.
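The sink you select in the form is recorded as a reference in the event source's spec. The following sketch shows a PingSource connected to a Knative service sink; the source name, schedule, data, and sink name are illustrative:

```yaml
apiVersion: sources.knative.dev/v1
kind: PingSource
metadata:
  name: ping-source
spec:
  schedule: "*/1 * * * *"
  data: '{"message": "Hello"}'
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display
```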

Verification

You can verify that the event source was created and is connected to the sink by viewing the Topology page.

  1. In the Developer perspective, navigate to Topology.

  2. View the event source and click the connected sink to see the sink details in the right panel.

Connecting a trigger to a sink

You can connect a trigger to a sink, so that events from a broker are filtered before they are sent to the sink. A sink that is connected to a trigger is configured as a subscriber in the Trigger object’s resource spec.

Example of a Trigger object connected to a Kafka sink
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: <trigger_name> (1)
spec:
...
  subscriber:
    ref:
      apiVersion: eventing.knative.dev/v1alpha1
      kind: KafkaSink
      name: <kafka_sink_name> (2)
1 The name of the trigger being connected to the sink.
2 The name of a KafkaSink object.
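To filter the events that reach the sink, you can add a filter section to the Trigger object. The following is a fuller sketch that names a broker and filters on a CloudEvents attribute; the trigger name, broker name, event type, and sink name are illustrative:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: my-trigger
spec:
  broker: default
  filter:
    attributes:
      type: dev.example.events
  subscriber:
    ref:
      apiVersion: eventing.knative.dev/v1alpha1
      kind: KafkaSink
      name: my-kafka-sink
```

Only events whose attributes match every key-value pair in filter.attributes are delivered to the subscriber.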