Event-driven applications with Serverless Knative Eventing - Instructions

1. Overview of activities

Here is an outline of the activities you will complete as part of this module.

  • Gain an understanding of what has already been set up for you

  • Deploy the Quarkus Knative Services needed for the solution

  • Build an event mesh with OpenShift Serverless Eventing components

  • Try out the solution end-to-end to see how data flows through the various systems and services

2. Prepare your Development environment

2.1. Ensure lab readiness

Before you proceed, make sure your lab environment is completely ready for executing the lab instructions.

  • Access the Workshop Deployer browser tab and check if the Launch new channels using Contract-First approach has turned green. This indicates that the module has been fully deployed and is ready to use.

serverless workshop deployer
Figure 1. Module Readiness on Workshop Deployer

3. Deployment overview and introduction

  • In a browser window, navigate to the {openshift_cluster_console}[OpenShift console, window="console"]. If needed, log in with your username and password ({user_name}/{user_password}).
    If this is the first time you have opened the console, you will be directed to the Developer Perspective of the console, which shows the different namespaces you have access to.

    serverless openshift landing
    • globex-serverless-{user_name} namespace contains the components you will be working on in this lab.

    • globex-gitops-{user_name} namespace contains the ArgoCD elements for GitOps. This is described in detail in the Introduction module.

    • globex-mw-{user_name} namespace contains middleware deployments, including AMQ Streams (Apache Kafka).

    • globex-{user_name} namespace contains other services the application is dependent on. This is described in detail in the Introduction module.

  • Click on the globex-serverless-{user_name} namespace link to select the namespace you are going to use in this lab, and select Topology from the left menu.

  • The Topology view displays the components that have already been set up for you. Note that this screenshot shows the topology after the components have been rearranged for readability.

    serverless namespace initial toplogy
  • Let us look at a few of the current components. The rest of the components are discussed in the next sections.

    Component — Function

    globex-web — A NodeJS + Angular web application which serves as the front-end website. This app relies on the core services deployed in the globex-{user_name} namespace and on the product-reviews service described below.

    product-reviews — A Quarkus-based service which acts as the backend for the globex-web web application.

    reviews-db — A PostgreSQL database which stores the reviews once they are moderated.

    serverless-kafka-secret — For the purpose of this workshop, this job copies the Kafka credentials (the kafka-secret Secret) into the current globex-serverless-{user_name} namespace. A sketch of the secret's likely shape follows this table.
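
    The kafka-secret Secret is consumed later in this module by the KafkaSink and KafkaSource resources. Based on the keys those resources reference, its shape is roughly the sketch below; the values are placeholders, the SASL mechanism is an assumption, and other keys may also be present:

    apiVersion: v1
    kind: Secret
    metadata:
      name: kafka-secret
      namespace: globex-serverless-{user_name}
    type: Opaque
    stringData:
      user: <kafka-username>          # placeholder
      password: <kafka-password>      # placeholder
      sasl.mechanism: SCRAM-SHA-512   # assumption; a common choice with AMQ Streams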

3.1. Red Hat AMQ Streams setup overview

  • Red Hat AMQ Streams, based on Apache Kafka, has already been set up for you. The Kafka broker is installed in the globex-mw-{user_name} namespace. The same namespace also hosts the AMQ Streams console, a web UI for viewing Kafka topics and browsing consumer groups.

  • Click to navigate to the AMQ Streams console.

    • This redirects you to the AMQ streams console login page.

    • For the purpose of this workshop, choose Sign in with Anonymous Session to access the console if you are not already signed in.

  • Navigate to Kafka Clusters → kafka → Topics. Notice that there are 3 topics relevant to this module (you can filter with the word reviews).

    amqstreams console 3topics
  • Here is what each of these topics is meant for (a sketch of how such a topic is declared follows the table):

    Kafka Topic — Function

    globex.reviews — When a user submits a review, that review is produced to this topic via the Knative Eventing framework.

    reviews.moderated — Reviews which have been moderated are produced to this topic, to be persisted in a database.

    reviews.sentiment — Holds the reviews after analysis, together with a sentiment score.
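
    These topics have been pre-created for you. With AMQ Streams (Strimzi), a topic is typically declared as a KafkaTopic resource alongside the Kafka cluster. The sketch below shows what the declaration for globex.reviews might look like; the partition and replica counts are assumptions:

    apiVersion: kafka.strimzi.io/v1beta2
    kind: KafkaTopic
    metadata:
      name: globex.reviews
      namespace: globex-mw-{user_name}
      labels:
        strimzi.io/cluster: kafka   # must match the Kafka cluster name
    spec:
      partitions: 1   # assumption
      replicas: 1     # assumption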

3.2. Introducing OpenShift Serverless for event-driven architecture

When a user submits a product review from the Globex website, the destination for the review is Red Hat AMQ Streams (Apache Kafka). Similarly, a number of other services in this solution produce and consume Kafka messages to moderate and analyse these reviews in (close to) real time. To make it easier to build out such an event-driven architecture, we introduce OpenShift Serverless (based on Knative Eventing).

Services which would otherwise need to be Kafka clients and producers now only need to speak HTTP. Knative Eventing uses standard HTTP requests to send and receive events between event producers and sinks. These events conform to the CloudEvents specification, which enables creating, parsing, sending, and receiving events in any programming language.

[Click to know] What is CloudEvents?

CloudEvents is a specification for describing event data in a common way. An event includes context and data about an occurrence. Each event is uniquely identified by its source and id attributes. The headers of a CloudEvent help Knative Eventing route the event to the right destination.
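
For illustration, here is a minimal CloudEvent for a submitted review, rendered as YAML. The source and type values match the ones used later in this module; the data fields are hypothetical:

    # Required CloudEvents attributes: specversion, id, source, type
    specversion: "1.0"
    id: "1"
    source: submit-review
    type: submit-review-event
    datacontenttype: application/json
    data:                 # hypothetical review payload
      product: "Quarkus T-shirt"
      rating: 5
      text: "Great quality, fits perfectly."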

4. Deploy Knative Services

As a first step, let us ensure all the services are created. A few of the services are predeployed in your OpenShift cluster. These services are built using the Knative Serving framework.

[Click to know] What is Knative Serving?

OpenShift Serverless, with Knative Serving, makes it easy to define and control how a serverless workload behaves on the Kubernetes cluster. With just one Kubernetes custom resource, all the primary resources (Services, Routes, Configurations, and Revisions) are created and managed. Knative Serving supports rapid deployment of serverless containers and autoscaling, including scaling pods down to zero.
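
As a minimal sketch (the service name and image are hypothetical), the only thing a Knative Service strictly requires is a container image; the Route, Configuration, and Revision are all derived from it:

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: hello                                   # hypothetical
    spec:
      template:
        spec:
          containers:
            - image: quay.io/example/hello:latest   # hypothetical image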

  • There are 3 Knative Services used in this solution:

    Service name — Function

    aiml-moderate-reviews — Checks whether the product reviews are acceptable or abusive, and assigns a score accordingly (-1 if acceptable, 0 if abusive). The scored reviews are sent to the reviews.moderated topic. This service is predeployed.

    aiml-sentiment-reviews — Analyses the product reviews and creates a new message with a sentiment score. The scored reviews are sent to the reviews.sentiment topic. This service is predeployed.

    persist-reviews — Persists all moderated reviews to a PostgreSQL database. You will deploy this service in the next steps.

  • Navigate to the OpenShift Console’s Administrator view, and go to the Serverless > Serving menu. You will see that two of the above-mentioned services have already been deployed (aiml-moderate-reviews and aiml-sentiment-reviews).
    In the next step, you will create the persist-reviews Knative service using a custom resource.

    serverless 2knative services
  • Create persist-reviews: Click on the Create button highlighted in the screenshot above and choose the Create > Service option. Replace all existing content with the following YAML, and click Save.

    create knative service
    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: persist-reviews
      namespace: globex-serverless-{user_name}
    spec:
      template:
        metadata:
          annotations:
            # Keep at least one pod running at all times (no scale-to-zero)
            autoscaling.knative.dev/min-scale: "1"
        spec:
          containers:
            - image: quay.io/globex-sentiment-analysis/persist-reviews:latest
              volumeMounts:
                # Mount the service configuration from the persist-reviews Secret
                - mountPath: /deployments/config
                  name: config
                  readOnly: true
          volumes:
            - name: config
              secret:
                secretName: persist-reviews
  • Navigate back to the {openshift_cluster_console}/topology/ns/globex-serverless-{user_name}?view=graph[Developer > Topology, window="console", target="console"] view of the globex-serverless-{user_name} namespace, and you will notice all three Knative services.

    3knative service
  • A few interesting points to note about the newly created persist-reviews:

    • This service is shown with a dark blue colour because of the annotation autoscaling.knative.dev/min-scale: "1" added to the YAML when the service was created. This means a minimum of one pod is running at all times, instead of scaling down to zero like the other two services (related autoscaling annotations are sketched after this list).

    • By just providing the container image, Knative Serving creates all the other needed Kubernetes resources (Services, Routes, Configurations, and Revisions), making it easy for developers to create such services quickly.
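
    Related annotations can tune the autoscaling behaviour further; for instance (the values below are illustrative):

    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "1"   # never scale below one pod
        autoscaling.knative.dev/max-scale: "5"   # illustrative upper bound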

5. Connect Knative Services to Kafka using Knative Eventing

In this section we will connect the Knative Services (see the previous section) to Kafka using a KafkaSink and a SinkBinding.

[Click to know] What are KafkaSink and SinkBinding?
  • A KafkaSink persists incoming CloudEvents to a configurable Apache Kafka topic. Event producers (such as apps or devices) can send CloudEvents over HTTP to the KafkaSink, thereby sparing app developers from adopting new protocols and message formats. The KafkaSink then sends the CloudEvents it receives to the configured Apache Kafka topic.

  • SinkBinding supports decoupling the source (the service which produces events) from the actual sink. The SinkBinding object injects environment variables (such as the sink URL) into the source's containers, thereby decoupling the source from the sink (see the sketch below).
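
Concretely, once a SinkBinding is applied, Knative injects a K_SINK environment variable into the containers of the bound workload, and the application simply POSTs its CloudEvents to that URL. The resulting pod spec contains something like the following; the exact URL is illustrative:

    env:
      - name: K_SINK   # injected by the SinkBinding; resolves to the sink's address
        value: http://kafka-sink-ingress.knative-eventing.svc.cluster.local/globex-serverless-{user_name}/reviews-sink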

5.1. Create Sink and SinkBinding

This solution needs a number of KafkaSinks and SinkBindings for the various Kafka topics described in an earlier section. You will create one of them here, while the others have been preconfigured for you.

Here is a visual of how a review flows from the user to Kafka with Knative Eventing.

  • The reviews submitted by the user are sent to the product-reviews Quarkus service through an HTTP POST.

  • The product-reviews service sends this review as a CloudEvent to the reviews-sink KafkaSink over HTTP.

  • The Quarkus service remains agnostic to the internals of the Kafka streaming platform.

  • The reviews-sink KafkaSink sends this CloudEvent to the globex.reviews Kafka topic.

reviews keventing kafka

Now, go ahead and create the Sink and SinkBinding.

  • Click on the (+) icon found on top of the OpenShift Console to access the Import YAML wizard.

console add yaml
  • Copy the following resource into the Import YAML form, and click Create to create the KafkaSink reviews-sink, which will send messages to the globex.reviews Kafka topic.

    Click to see a visual
    create sink
    apiVersion: eventing.knative.dev/v1alpha1
    kind: KafkaSink
    metadata:
      name: reviews-sink
      namespace: globex-serverless-{user_name}
    spec:
      bootstrapServers:
        - kafka-kafka-bootstrap.globex-mw-{user_name}.svc.cluster.local:9092
      # Incoming CloudEvents are written to this topic
      topic: globex.reviews
      numPartitions: 1
      # binary mode maps CloudEvent attributes to Kafka headers (ce_*)
      contentMode: binary
      auth:
        secret:
          ref:
            name: kafka-secret
  • Use the Import YAML form to create a SinkBinding from the product-reviews Quarkus service to the KafkaSink reviews-sink that you created in the previous step.

    apiVersion: sources.knative.dev/v1
    kind: SinkBinding
    metadata:
      name: product-reviews-to-reviews-sink
      namespace: globex-serverless-{user_name}
    spec:
      # Where the events are delivered to
      sink:
        ref:
          apiVersion: eventing.knative.dev/v1alpha1
          kind: KafkaSink
          name: reviews-sink
          namespace: globex-serverless-{user_name}
      # The workload into which the sink URL is injected
      subject:
        apiVersion: apps/v1
        kind: Deployment
        name: product-reviews
        namespace: globex-serverless-{user_name}
  • Navigate back to the {openshift_cluster_console}/topology/ns/globex-serverless-{user_name}?view=graph[Topology View, window="console", target="console"], to view the new KafkaSink and SinkBinding you created.

    Click to see a visual
    sink sinkb created
  • Here is the list of all the Kafka Sinks used in this solution.

    Sink name — Function

    reviews-sink — Sends the reviews submitted by a user (HTTP POST from the globex-web app to the product-reviews Quarkus service) as CloudEvents to the globex.reviews Kafka topic.

    moderated-reviews-sink — Sends reviews moderated by the aiml-moderate-reviews service to the reviews.moderated topic.

    reviews-sentiment-sink — Sends the sentiment scores produced by the aiml-sentiment-reviews service to the reviews.sentiment topic.

5.2. Create Knative Broker and Triggers

The next step is to set up the Knative components that invoke the HTTP endpoints of the services (aiml-moderate-reviews, aiml-sentiment-reviews & persist-reviews) whenever a new event occurs because a product review has been submitted. This is done using the Knative Source, Broker, and Trigger components.

[Click to know] What is Knative Source, Broker and Triggers?
  • A KafkaSource reads messages from existing Apache Kafka topics and sends them (in CloudEvents format) to a Knative Broker.

  • Brokers provide a discoverable endpoint for incoming events, and use Triggers for event delivery.

  • A Trigger subscribes to events from a specific broker, filters them based on CloudEvents headers, and delivers them to a Knative service’s HTTP endpoint.

5.2.1. Create Knative Broker

  • Click on the (+) icon found on top of the OpenShift Console to access the Import YAML wizard.

  • Copy the following YAML and click Create to create a Knative Broker.
    Note: there is just one broker for the entire solution; triggers route the events to the right services, thereby building a real-time event mesh.

    apiVersion: eventing.knative.dev/v1
    kind: Broker
    metadata:
      name: globex-broker
      namespace: globex-serverless-{user_name}
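
    Once the Broker becomes ready, its status exposes the ingress URL to which event sources deliver their events. The shape is roughly the following sketch; the exact URL is illustrative:

    status:
      address:
        url: http://broker-ingress.knative-eventing.svc.cluster.local/globex-serverless-{user_name}/globex-broker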

5.2.2. Create Knative source

  • Click on the (+) icon found on top of the OpenShift Console to access the Import YAML wizard.

  • Copy the following YAML to create a KafkaSource.
    Note that this KafkaSource reads from the three topics defined in the YAML below, and refers to the globex-broker you created in the previous step.

    apiVersion: sources.knative.dev/v1beta1
    kind: KafkaSource
    metadata:
      name: kafka-source
      namespace: globex-serverless-{user_name}
    spec:
      bootstrapServers:
        - 'kafka-kafka-bootstrap.globex-mw-{user_name}.svc.cluster.local:9092'
      # The three topics this source consumes from
      topics:
        - globex.reviews
        - reviews.moderated
        - reviews.sentiment
      net:
        # SASL credentials are read from the kafka-secret Secret
        sasl:
          enable: true
          password:
            secretKeyRef:
              key: password
              name: kafka-secret
          type:
            secretKeyRef:
              key: sasl.mechanism
              name: kafka-secret
          user:
            secretKeyRef:
              key: user
              name: kafka-secret
        tls:
          caCert: {}
          cert: {}
          key: {}
      # All consumed messages are forwarded to the Broker as CloudEvents
      sink:
        ref:
          apiVersion: eventing.knative.dev/v1
          kind: Broker
          name: globex-broker
          namespace: globex-serverless-{user_name}
  • The kafka-source is created, and all the Conditions are True, denoting that the creation was successful.

    Click to see a visual
    kafkasource created
  • Navigate back to the {openshift_cluster_console}/topology/ns/globex-serverless-{user_name}?view=graph[Topology View, window="console"], to view the new Source and Broker you created.

    Click to see a visual
    source broker topology

5.2.3. Create Knative triggers

You will now create triggers which invoke the HTTP endpoints of the Knative services depending on the CloudEvents headers.
Each CloudEvent is tagged with specific values in its ce-type and ce-source headers, which the Triggers then use to route the event to the correct service's HTTP endpoint.

  • Click on the (+) icon found on top of the OpenShift Console to access the Import YAML wizard.

  • Copy and paste the following resources to create the 3 Triggers matching the 3 Knative services.

    # Routes moderated reviews (from the reviews.moderated topic) to persist-reviews
    apiVersion: eventing.knative.dev/v1
    kind: Trigger
    metadata:
      name: persist-reviews-trigger
      namespace: globex-serverless-{user_name}
    spec:
      broker: globex-broker
      filter:
        attributes:
          source: review-moderated
          type: review-moderated-event
      subscriber:
        ref:
          apiVersion: serving.knative.dev/v1
          kind: Service
          name: persist-reviews
        uri: /review/submit

    ---
    # Routes newly submitted reviews to the moderation service
    apiVersion: eventing.knative.dev/v1
    kind: Trigger
    metadata:
      name: moderate-reviews-trigger
      namespace: globex-serverless-{user_name}
    spec:
      broker: globex-broker
      filter:
        attributes:
          source: submit-review
          type: submit-review-event
      subscriber:
        ref:
          apiVersion: serving.knative.dev/v1
          kind: Service
          name: aiml-moderate-reviews
        uri: /analyze
    ---
    # Routes the same submitted reviews to the sentiment analysis service
    apiVersion: eventing.knative.dev/v1
    kind: Trigger
    metadata:
      name: sentiment-reviews-trigger
      namespace: globex-serverless-{user_name}
    spec:
      broker: globex-broker
      filter:
        attributes:
          source: submit-review
          type: submit-review-event
      subscriber:
        ref:
          apiVersion: serving.knative.dev/v1
          kind: Service
          name: aiml-sentiment-reviews
        uri: /analyze
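
    Trigger filters perform exact matches on CloudEvent attributes, and multiple attributes are combined with a logical AND. A Trigger without a filter would receive every event on the broker, as in this sketch (hypothetical, not part of this lab):

    apiVersion: eventing.knative.dev/v1
    kind: Trigger
    metadata:
      name: catch-all-trigger   # hypothetical
      namespace: globex-serverless-{user_name}
    spec:
      broker: globex-broker     # no filter: receives all events
      subscriber:
        ref:
          apiVersion: serving.knative.dev/v1
          kind: Service
          name: persist-reviews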
  • You will see that the triggers have been created successfully.

    Click to see a visual
    triggers created
  • Navigate back to the {openshift_cluster_console}/topology/ns/globex-serverless-{user_name}?view=graph[Topology View, window="console"], to view the new triggers you created

    Click to see a visual
    triggers create topology
  • Click on the Broker globex-broker to view how the three Knative services subscribe to the Knative Broker using Triggers; also note the various filters applied to the triggers.
    These filters match the CloudEvents headers of each message to the right service, which then acts on the message.

broker service filters

6. Test the Review Moderation and Sentiment Analysis

  • You have now completed the setup of all the components needed. Navigate to {openshift_cluster_console}/topology/ns/globex-serverless-{user_name}?view=graph[Topology View, window="console"] to view the final topology.

serverless namespace final toplogy
  • To open the Globex web application, click on the Open URL symbol next to the globex-web deployment in the Topology view.

    serverless launch webapp toplogy
  • Click on the Login link on the top-right corner of the home page

    webapp login
  • You will be navigated to the Keycloak login page

If the Keycloak login page doesn’t show up, you might already be logged in as a different user. Click Logout, and then log in to this page again.

  • Log in using any of the following usernames. The password is openshift for all of these users.

    • asilva (or) mmiller (or) asanders (or) cjones (or) pwong

      webapp login keycloak
  • Click on the Cool Stuff Store link in the top menu to view the list of available products.

    webapp products
  • Click on any product to view the details page.

  • Type a review comment and click on Submit.

    webapp products details
  • If the review comment is appropriate, it will appear on the same page after a few seconds.

    webapp products view review
  • In the OpenShift Developer > Topology view, you will also notice that the Knative services have all turned fully blue: they have been triggered by the review submission and have scaled up.
    In a few seconds, the two services other than persist-reviews will go back to a white ring, denoting that they have scaled down to zero since they are no longer in use.

    reviews knative services
  • Now, go ahead and leave review comments on as many products as you like. If you are feeling adventurous, you can try a few inappropriate comments too, to see how they are moderated ;)

    • Any inappropriate comment is replaced with a note from Globex and displayed on the website as shown below.

      inapprop review

6.1. Under the hood: Step through Review moderation flow

  • Navigate to the AMQ Streams console. Filter for the term reviews to view the 3 related topics.

  • Click on the globex.reviews topic to open the topic’s Overview page.

    amqstreams globex reviews topic overview
  • Click on any of the messages to view the complete message payload.

    amqstreams globex reviews topic detail
  • Click on the Headers tab and take note of the message headers. This is what each of them means:

    amqstreams globex reviews headers
    • ce_id: 1 - This is a unique id for each message.

    • ce_source: submit-review and ce_type: submit-review-event - These are the primary values which are used by the Knative triggers to route the message to the right Knative service.

  • Navigate back to the {openshift_cluster_console}/topology/ns/globex-serverless-{user_name}?view=graph[Topology View, window="console"], to view the corresponding mapping in the Knative Broker and Triggers

    • Click on the blue link (highlighted in blue below) pointing to aiml-moderate-reviews service. This link represents the moderate-reviews-trigger.

    • The right-hand panel shows the trigger’s source = submit-review and type = submit-review-event.

    • You will note that this matches the CloudEvents headers in the Kafka message that you viewed in the AMQ Streams console.

    • This is how the Knative Triggers match the messages to the right endpoint.

      moderate reviews trigger
  • Once the reviews are sent to the aiml-moderate-reviews (Python) service, it uses the Hate-speech-CNERG/english-abusive-MuRIL AI/ML model to identify whether the product review is abusive.

    • A score of -1 is assigned if the review is acceptable, or 0 if the comment is abusive.

    • This service then POSTs the scored review to the moderated-reviews-sink (with the help of the SinkBinding which binds the sink to the service).

      moderated reviews sink
    • This sink is configured to write to the reviews.moderated topic. Here is how a moderated review (Kafka message) looks in the AMQ Streams console.

      amqstreams moderate review score
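
      For reference, the preconfigured moderated-reviews-sink is likely defined much like the reviews-sink you created earlier; the following is a sketch, assuming the same Kafka cluster and credentials:

      apiVersion: eventing.knative.dev/v1alpha1
      kind: KafkaSink
      metadata:
        name: moderated-reviews-sink
        namespace: globex-serverless-{user_name}
      spec:
        bootstrapServers:
          - kafka-kafka-bootstrap.globex-mw-{user_name}.svc.cluster.local:9092
        topic: reviews.moderated   # moderated reviews land here
        contentMode: binary
        auth:
          secret:
            ref:
              name: kafka-secret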
  • Next, the message is sent to the persist-reviews Quarkus service through the persist-reviews-trigger. This service persists the review in a PostgreSQL database if the score is less than 0 (that is, the review is acceptable).

    • Note that the trigger’s filter source and type match the ce_source and ce_type headers of the message from the reviews.moderated topic shown in the screenshot above.

      persist reviews trigger

6.2. Under the hood: Review sentiment analysis

The Review sentiment analysis flow is quite similar to the Moderate Review flow.

review sentiment flow
  • The sentiment-reviews-trigger responds to the same CloudEvents filter headers as the moderate-reviews-trigger; this is because when a review is submitted, it needs to be processed by both the moderation and the sentiment analysis services.

    sentiment reviews trigger
  • The aiml-sentiment-reviews service, once invoked, uses the nlptown/bert-base-multilingual-uncased-sentiment model to assign a score (from -1 to 4) depending on the tone of the review.

  • The review is then sent to the reviews.sentiment topic. Access this topic in the AMQ Streams console, and click on any of the Kafka messages to view the sentiment score.

    amqstreams sentiment score
    • As a next step, this sentiment score can be used to build a dashboard to visualise the sentiment of various categories of products.

      Click to see a sample visual
      globex dashboard sample

7. Congratulations

Congratulations! With this, you have completed the Event-Driven Applications workshop module!

Please close all but the Workshop Deployer browser tab to avoid a proliferation of browser tabs, which can make working on other modules difficult.

Proceed to the Workshop Deployer to choose your next module.