Technical Capabilities

The key capabilities you will use in this module are:

  • Camel K, the integration tool to create all the processing flows.

  • AMQ Broker, the messaging broker enabling event-based interactions.

  • streams for Apache Kafka, to store and replay customer/agent interactions.

  • DataGrid (Cache), to keep the context of interactions alive.

  • S3 storage, to store conversations.

    S3 in this workshop is served by MinIO for simplicity. Full-featured on-premise storage capabilities are provided by OpenShift Data Foundation.


What is Red Hat build of Apache Camel K?

Camel K will be your primary tool in this learning module. You will use it to connect sources and targets and to process the data exchanged between them.

Camel K is a subproject of Apache Camel, often described as the Swiss Army knife of integration. Apache Camel is the most popular open source community project for solving integration problems of all kinds.

Camel K simplifies working with Kubernetes environments so you can get your integrations up and running in a container quickly.
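To make this concrete, here is a minimal sketch of a Camel K integration written in the Camel YAML DSL. The route and step names here are illustrative, not part of this workshop's solution: a timer source triggers every five seconds, sets a message body, and sends it to a log endpoint.

```yaml
# greeting.yaml - a minimal Camel K integration (illustrative example)
- from:
    uri: "timer:tick?period=5000"   # fire every 5 seconds
    steps:
      - setBody:
          constant: "Hello from Camel K"
      - to: "log:info"              # print the message to the pod log
```

You would run this on the cluster with `kamel run greeting.yaml`, and Camel K builds and deploys it as a container automatically.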


What is Red Hat AMQ Broker?

AMQ Broker will be your core message broker. It enables event-driven interactions between all your Camel integrations.

AMQ Broker is a pure-Java multiprotocol message broker. It’s built on an efficient, asynchronous core with a fast native journal for message persistence and the option of shared-nothing state replication for high availability.
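The event-driven interactions described above can be sketched as a pair of Camel routes, one producing to and one consuming from a broker queue. The queue name `support-requests` and the broker connection (configured separately for the `jms` component) are assumptions for illustration, not the workshop's actual configuration.

```yaml
# illustrative producer/consumer routes against an AMQ Broker queue
- from:
    uri: "timer:ticket?period=10000"      # simulate a new event every 10s
    steps:
      - setBody:
          constant: "new support request"
      - to: "jms:queue:support-requests"  # publish to the broker
- from:
    uri: "jms:queue:support-requests"     # subscribe to the same queue
    steps:
      - to: "log:received"                # log each consumed message
```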


What is Red Hat Data Grid?

Data Grid will provide your core context-caching capability, keeping the keys to chat rooms while conversations between customers and agents are active.

Data Grid is an in-memory, distributed, NoSQL datastore solution. Your applications can access, process, and analyze data at in-memory speed to deliver a superior user experience.
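As a sketch of how an integration might keep chat-room context in the cache, the Camel `infinispan` component can write key/value pairs to a Data Grid server. The cache name, host, key, and value below are all illustrative assumptions.

```yaml
# illustrative route storing a chat-room key in Data Grid
- from:
    uri: "timer:cache?repeatCount=1"      # run once for demonstration
    steps:
      - setHeader:
          name: CamelInfinispanKey        # the cache key
          expression:
            constant: "room-42"
      - setHeader:
          name: CamelInfinispanValue      # the cached value
          expression:
            constant: "active-session-token"
      # PUT the entry into the (assumed) 'chat-rooms' cache
      - to: "infinispan:chat-rooms?hosts=datagrid:11222&operation=PUT"
```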


What is Red Hat streams for Apache Kafka?

Red Hat streams for Apache Kafka will be your core event streaming platform, where customer/agent sessions (the streams) are stored for later replay.

Red Hat streams for Apache Kafka is an event streaming platform that aggregates events over time (streams). This allows applications to replay the streams for various purposes, for example data analysis or pattern discovery.
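On OpenShift, topics for streams for Apache Kafka are typically declared as custom resources managed by the operator. The sketch below shows what such a definition could look like; the topic name, cluster label, and retention settings are assumptions for illustration.

```yaml
# illustrative KafkaTopic resource for storing session streams
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: support-sessions
  labels:
    strimzi.io/cluster: my-cluster   # must match your Kafka cluster's name
spec:
  partitions: 3
  replicas: 1
  config:
    retention.ms: 604800000          # keep events for 7 days to allow replay
```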


What is Red Hat OpenShift Data Foundation?

You will use S3 storage in this learning module as your core archival capability, keeping a record of Globex's support activities to meet regulatory requirements.

OpenShift Data Foundation is a data management solution that provides higher-level data services and persistent storage for Red Hat OpenShift. It provides file, block, and object storage, with built-in data protection and disaster recovery.

Applications can function and interact with data in a simplified, consistent and scalable manner.
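As a sketch of the archival flow, a Camel route can take finished conversations from a queue and upload them to an S3 bucket served by MinIO. The queue name, bucket name, credentials, and endpoint below are illustrative assumptions, not the workshop's real values.

```yaml
# illustrative route archiving conversations to S3 (MinIO)
- from:
    uri: "jms:queue:closed-conversations"   # assumed source queue
    steps:
      - setHeader:
          name: CamelAwsS3Key               # object key in the bucket
          expression:
            simple: "conversation-${date:now:yyyyMMdd-HHmmss}.json"
      # upload to the (assumed) 'conversations' bucket on a local MinIO endpoint
      - to: "aws2-s3://conversations?accessKey=RAW(minio)&secretKey=RAW(minio123)&region=us-east-1&overrideEndpoint=true&uriEndpointOverride=http://minio:9000"
```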

Multi-channel Support Service Implementation

In the next chapter, you will be guided through the implementation and deployment of the Multi-channel Support Service. The full solution entails far more than can be built during a workshop, so most components are already in place; you will focus on a few key activities to deploy and run the solution.

Proceed to the instructions for this module.