Knative Eventing on OpenShift Container Platform enables developers to use an event-driven architecture with serverless applications. An event-driven architecture is based on the concept of decoupled relationships between event producers and event consumers.
Event producers create events, and event sinks, or consumers, receive them. Knative Eventing uses standard HTTP POST requests to send and receive events between event producers and sinks. These events conform to the CloudEvents specification, which enables creating, parsing, sending, and receiving events in any programming language.
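Because delivery is plain HTTP, any HTTP client can act as an event producer. The following minimal sketch posts a CloudEvent in the binary content mode, where the event attributes travel as ce-* headers; the sink URL, event type, and source are illustrative assumptions rather than values from this documentation.

import json
import uuid

import requests

# Assumed sink address; in practice this comes from the resource's status.address.url.
sink_url = "http://example-broker.default.svc.cluster.local"

headers = {
    "ce-specversion": "1.0",
    "ce-type": "com.example.order.created",   # illustrative event type
    "ce-source": "/orders/frontend",          # illustrative event source
    "ce-id": str(uuid.uuid4()),
    "content-type": "application/json",
}

# Send the event as a standard HTTP POST; the sink acknowledges with a 2xx status.
response = requests.post(sink_url, headers=headers, data=json.dumps({"orderId": 42}))
response.raise_for_status()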
Knative Eventing supports the following use cases:
You can send events to a broker as an HTTP POST, and use binding to decouple the destination configuration from your application that produces events.
You can use a trigger to consume events from a broker based on event attributes. The application receives events as an HTTP POST.
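For the first use case, a binding (for example, a SinkBinding) injects the destination into the producer's environment as the K_SINK variable, so the application code never hardcodes the broker URL. A minimal sketch, assuming the binding already exists and using illustrative event attributes:

import json
import os
import uuid

import requests

# The destination is injected by the binding, keeping it decoupled from the code.
sink_url = os.environ["K_SINK"]

requests.post(
    sink_url,
    headers={
        "ce-specversion": "1.0",
        "ce-type": "com.example.heartbeat",  # illustrative event type
        "ce-source": "/heartbeat/worker",    # illustrative event source
        "ce-id": str(uuid.uuid4()),
        "content-type": "application/json",
    },
    data=json.dumps({"status": "ok"}),
)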
To enable delivery to multiple types of sinks, Knative Eventing defines the following generic interfaces that can be implemented by multiple Kubernetes resources:
Addressable
Able to receive and acknowledge an event delivered over HTTP to an address defined in the status.address.url field of the resource. The Kubernetes Service resource also satisfies the addressable interface.
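Because the delivery address is published in the resource's status, a client can discover where to send events at run time. The sketch below uses the Kubernetes Python client to read status.address.url from a Broker; the broker name and namespace are assumptions.

from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
api = client.CustomObjectsApi()

# Assumes a Broker named "default" exists in the "default" namespace.
broker = api.get_namespaced_custom_object(
    group="eventing.knative.dev",
    version="v1",
    namespace="default",
    plural="brokers",
    name="default",
)

# The URL that accepts CloudEvents over HTTP POST.
print(broker["status"]["address"]["url"])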
Callable
Able to receive an event delivered over HTTP and transform it, returning 0 or 1 new events in the HTTP response payload. These returned events may be further processed in the same way that events from an external event source are processed.
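A callable resource is ultimately backed by an HTTP endpoint. The sketch below, using only the Python standard library, receives an event and either returns one new event as ce-* headers plus a response body, or returns an empty 2xx response to indicate that no event is produced; the event types and filter are illustrative assumptions.

import json
import uuid
from http.server import BaseHTTPRequestHandler, HTTPServer


class EventHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("content-length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        event_type = self.headers.get("ce-type", "")

        if event_type != "com.example.order.created":  # illustrative filter
            self.send_response(204)  # 0 events: acknowledge without a reply
            self.end_headers()
            return

        # 1 event: the new event is carried in the HTTP response payload.
        reply = json.dumps({"orderId": body.get("orderId"), "validated": True}).encode()
        self.send_response(200)
        self.send_header("ce-specversion", "1.0")
        self.send_header("ce-type", "com.example.order.validated")  # illustrative
        self.send_header("ce-source", "/orders/validator")
        self.send_header("ce-id", str(uuid.uuid4()))
        self.send_header("content-type", "application/json")
        self.send_header("content-length", str(len(reply)))
        self.end_headers()
        self.wfile.write(reply)


if __name__ == "__main__":
    HTTPServer(("", 8080), EventHandler).serve_forever()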
Knative Kafka provides integration options for you to use supported versions of the Apache Kafka message streaming platform with OpenShift Serverless. Kafka provides options for event source, channel, broker, and event sink capabilities.
Knative Kafka is not currently supported for IBM Z and IBM Power.
Knative Kafka provides additional options, such as:
Kafka source
Kafka channel
Kafka broker
Kafka sink
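With the Kafka source option listed above, messages produced to a Kafka topic are delivered to the configured sink as CloudEvents. The sketch below shows only the producer side, using the kafka-python package; the bootstrap server address and topic name are assumptions and must match an existing Kafka source configuration.

import json

from kafka import KafkaProducer

# Assumed bootstrap address of the Kafka cluster used by the Kafka source.
producer = KafkaProducer(
    bootstrap_servers="my-cluster-kafka-bootstrap.kafka.svc:9092",
    value_serializer=lambda value: json.dumps(value).encode("utf-8"),
)

# Publish a record to the topic the Kafka source watches; the source forwards
# it to its sink as a CloudEvent.
producer.send("knative-demo-topic", {"orderId": 42, "status": "created"})
producer.flush()  # block until the record is acknowledged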